CN112057861B - Virtual object control method and device, computer equipment and storage medium - Google Patents

Virtual object control method and device, computer equipment and storage medium

Info

Publication number
CN112057861B
CN112057861B (application CN202010953355.8A)
Authority
CN
China
Prior art keywords
virtual object
virtual
scene
thumbnail map
virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010953355.8A
Other languages
Chinese (zh)
Other versions
CN112057861A (en)
Inventor
刘智洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010953355.8A priority Critical patent/CN112057861B/en
Publication of CN112057861A publication Critical patent/CN112057861A/en
Application granted granted Critical
Publication of CN112057861B publication Critical patent/CN112057861B/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5378 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8029 Fighting without shooting
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076 Shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a virtual object control method and apparatus, a computer device, and a storage medium, belonging to the field of computer technology. The method includes: displaying a first virtual object and a second virtual object; receiving an attack instruction of the first virtual object against the second virtual object; and, if the first number of times that the first virtual object has killed own-camp virtual objects, or the second number of times that the second virtual object has been killed by own-camp virtual objects, is not less than a reference number of times, controlling the life value of the first virtual object to decrease. Because a virtual object can attack other virtual objects belonging to the same camp, the realism of the virtual scene is improved. Moreover, when two virtual objects of the same camp fight, if the attacking virtual object has killed teammates too many times, or the attacked virtual object has been killed by teammates too many times, the life value of the attacking virtual object is controlled to decrease, which prevents malicious attacks between virtual objects belonging to the same camp.

Description

Virtual object control method and device, computer equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a virtual object control method, a virtual object control device, computer equipment and a storage medium.
Background
With the widespread popularity of electronic games, their functions have become increasingly rich. Usually, the virtual scene of an electronic game includes at least two opposing camps, and virtual objects belonging to different camps can attack each other to realize combat between the camps. However, when only virtual objects of different camps can attack each other, the attack mode is limited; this does not match an actual battle scene and lacks realism.
Disclosure of Invention
The embodiment of the application provides a virtual object control method and device, computer equipment and a storage medium, which can improve the reality of a virtual scene. The technical scheme is as follows:
in one aspect, a virtual object control method is provided, and the method includes:
displaying a first virtual object and a second virtual object, wherein the first virtual object and the second virtual object belong to the same camp;
receiving an attack instruction of the first virtual object to the second virtual object;
and if the first number of times that the first virtual object has killed own-camp virtual objects, or the second number of times that the second virtual object has been killed by own-camp virtual objects, is not less than a reference number of times, controlling the life value of the first virtual object to decrease.
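The application does not give this decision rule as code; as a minimal, hypothetical sketch in Python (all names are invented for illustration), the step can be read as:

```python
def resolve_friendly_attack(attacker_teamkills: int,
                            victim_teamkill_deaths: int,
                            reference_count: int) -> str:
    """Decide whose life value decreases when same-camp objects fight.

    attacker_teamkills: the "first number", times the attacker has
        killed own-camp virtual objects.
    victim_teamkill_deaths: the "second number", times the victim has
        been killed by own-camp virtual objects.
    """
    if (attacker_teamkills >= reference_count
            or victim_teamkill_deaths >= reference_count):
        # not less than the reference number of times: punish the attacker
        return "attacker"
    # otherwise normal friendly fire applies and the attacked object loses health
    return "victim"
```

For example, with a reference count of 3, an attacker who has already killed teammates three times loses health itself instead of damaging its target.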
In another aspect, a virtual object control method is provided, the method including:
displaying a virtual scene interface that includes the first virtual object and does not include a thumbnail map;
receiving a release instruction for a virtual scout owned by the first virtual object;
displaying a thumbnail map of the virtual scene in the virtual scene interface, wherein the thumbnail map comprises position identifications of all virtual objects in the virtual scene.
In one possible implementation manner, the receiving a release instruction for a virtual scout owned by the first virtual object includes:
receiving the release instruction sent by the server, where the release instruction is sent to the server after another device detects a release operation on the virtual scout by another virtual object, and the other virtual object and the first virtual object belong to the same camp.
In another aspect, there is provided a virtual object control apparatus, the apparatus including:
the display module is used for displaying a first virtual object and a second virtual object, and the first virtual object and the second virtual object belong to the same camp;
the receiving module is used for receiving an attack instruction of the first virtual object to the second virtual object;
the first control module is used for controlling the life value of the first virtual object to be reduced if the first number of times that the first virtual object kills the own virtual object or the second number of times that the second virtual object is killed by the own virtual object is not less than the reference number of times.
In one possible implementation, the apparatus further includes:
and the second control module is configured to control the life value of the second virtual object to decrease if both the first number and the second number are less than the reference number.
In another possible implementation manner, the first control module includes:
a control unit, configured to control the life value of the first virtual object to decrease by a reference value if the first number or the second number is not less than the reference number.
In another possible implementation manner, the first control module includes:
a control unit, configured to control the life value of the first virtual object to decrease by a reference proportion if the first number or the second number is not less than the reference number.
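The two penalty variants above, decreasing by a fixed reference value or by a reference proportion, can be sketched as follows. This is a hypothetical illustration; the clamp at zero is an assumption not stated by the application:

```python
def apply_penalty_fixed(life_value: float, reference_value: float) -> float:
    # variant 1: decrease the life value by a fixed reference value
    return max(0.0, life_value - reference_value)

def apply_penalty_proportional(life_value: float, reference_ratio: float) -> float:
    # variant 2: decrease the life value by a reference proportion of itself
    return max(0.0, life_value * (1.0 - reference_ratio))
```

The proportional variant scales the punishment with the attacker's remaining health, while the fixed variant applies the same cost regardless of state.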
In another possible implementation manner, the receiving module includes:
a detection unit, configured to detect a shooting operation performed by the first virtual object on the second virtual object using a virtual firearm; or,
the detection unit is further configured to detect a slashing operation performed by the first virtual object on the second virtual object using a virtual knife.
In another possible implementation manner, the receiving module includes:
and the detection unit is used for detecting the trigger operation on the attack option under the condition that the second virtual object is selected.
In another aspect, there is provided a virtual object control apparatus, the apparatus including:
a display module to display a virtual scene interface, the virtual scene interface including the first virtual object and not including a thumbnail map;
the receiving module is configured to receive a release instruction for a virtual scout owned by the first virtual object;
the display module is configured to display a thumbnail map of the virtual scene in the virtual scene interface, where the thumbnail map includes location identifiers of virtual objects in the virtual scene.
In one possible implementation, the receiving module includes:
a detection unit configured to detect a release operation for a virtual scout owned by the first virtual object.
In another possible implementation manner, the receiving module includes:
the receiving unit is configured to receive the release instruction sent by the server, where the release instruction is sent to the server after another device detects a release operation on the virtual scout, and the other virtual object and the first virtual object belong to the same camp.
In another possible implementation manner, the apparatus further includes:
and the adding module is used for adding the position identification of each virtual object in the thumbnail map according to the position of each virtual object in the virtual scene.
In another possible implementation manner, the adding module includes:
a determining unit, configured to determine a target position of any second virtual object in the thumbnail map according to the position of that second virtual object in the virtual scene and the scaling ratio between the virtual scene and the thumbnail map;
and the adding unit is used for adding the position identifier of the second virtual object in the thumbnail map according to the target position.
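The direct scaling path described by these units can be sketched in a few lines of Python (names are hypothetical illustrations, not the application's implementation):

```python
def scene_to_map(scene_position, scale_ratio):
    """Convert a position in the virtual scene to a target position on the
    thumbnail map using the scaling ratio between scene and map."""
    x, y = scene_position
    return (x * scale_ratio, y * scale_ratio)
```

A position identifier for the virtual object would then be drawn at the returned map coordinate.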
In another possible implementation manner, the virtual scene includes a plurality of first reference positions, the thumbnail map includes a plurality of second reference positions, and the plurality of first reference positions are in one-to-one correspondence with the plurality of second reference positions;
the determining unit is configured to: determine a first distance between the position of the second virtual object in the virtual scene and each first reference position; scale the plurality of first distances according to the scaling ratio to obtain a plurality of second distances; and locate the target position according to the plurality of second reference positions and the corresponding second distances, so that the target position is separated from each second reference position by the corresponding second distance.
In another aspect, a computer device is provided, which includes a processor and a memory, wherein at least one program code is stored in the memory, and the at least one program code is loaded and executed by the processor to implement the operations performed in the virtual object control method according to the above aspect.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, the at least one program code being loaded and executed by a processor to implement the operations performed in the virtual object control method according to the above aspect.
In yet another aspect, a computer program product or a computer program is provided, comprising computer program code stored in a computer-readable storage medium. A processor of a computer device reads the computer program code from the computer-readable storage medium and executes it, so that the computer device implements the operations performed in the virtual object control method described in the above aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
according to the method, the device, the computer equipment and the storage medium provided by the embodiment of the application, in a real scene, a person can attack teammates, therefore, in order to simulate the real scene, in a virtual scene, the virtual object can attack other virtual objects belonging to the same marketing, so that the authenticity of the virtual scene is improved, and when two virtual objects belonging to the same marketing are attacked, the frequency of attacking the teammates by the virtual object of an attacking party is too much, or the frequency of attacking the virtual object of the attacked party by the teammates is too much, the life value of the virtual object of the attacking party is controlled to be reduced, so that the virtual object of the attacking party is punished, and malicious attack between the virtual objects belonging to the same marketing is avoided.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of a virtual object control method provided in an embodiment of the present application;
fig. 3 is a flowchart of a virtual object control method provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a mode selection interface provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a virtual scene interface provided in an embodiment of the present application;
fig. 6 is a flowchart of a virtual object control method according to an embodiment of the present application;
fig. 7 is a flowchart of a virtual object control method provided in an embodiment of the present application;
fig. 8 is a flowchart of a virtual object control method provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of a virtual scene interface provided in an embodiment of the present application;
FIG. 10 is a schematic diagram of a virtual scene provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of a thumbnail map provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of a virtual scene interface provided in an embodiment of the present application;
fig. 13 is a flowchart of a virtual object control method according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a virtual object control apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a virtual object control apparatus according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a virtual object control apparatus according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of a virtual object control apparatus according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
The terms "first," "second," and the like as used herein may be used herein to describe various concepts that are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, a first virtual object may be referred to as a second virtual object, and similarly, a second virtual object may be referred to as a first virtual object, without departing from the scope of the present application.
As used herein, "at least one" includes one, two, or more; "a plurality" includes two or more; "each" refers to every one of a corresponding plurality; and "any" refers to any one of the plurality. For example, if a plurality of reference positions includes 3 reference positions, "each" refers to every one of the 3 reference positions, and "any one" refers to any one of the 3, which may be the first, the second, or the third reference position.
The virtual scene related to the present application may be used to simulate a three-dimensional virtual space, which may be an open space, and the virtual scene may be used to simulate a real environment in reality, for example, the virtual scene may include sky, land, sea, and the like, and the land may include environmental elements such as a desert, a city, and the like. Of course, the virtual scene may also include virtual objects, such as buildings, vehicles, and props for arming themselves or weapons required for fighting with other virtual objects. The virtual scene can also be used for simulating real environments in different weathers, such as sunny days, rainy days, foggy days or nights.
The user may control the virtual object to move in the virtual scene, the virtual object may be an avatar in the virtual scene for representing the user, and the avatar may be in any form, such as human, animal, etc., which is not limited in this application. Taking a shooting game as an example, the user may control the virtual object to freely fall, glide, open a parachute to fall, run, jump, crawl over land, or control the virtual object to swim, float, or dive in the sea, or the like, in the sky of the virtual scene. The user can also control the virtual object to enter and exit the building in the virtual scene, find and pick up the virtual article (e.g., weapon and other items) in the virtual scene, so as to fight with other virtual objects through the picked virtual article, for example, the virtual article may be clothing, helmet, bullet-proof clothing, medical supplies, cold weapons, hot weapons, or the like, or may be a virtual article left after other virtual objects are eliminated. The above scenarios are merely illustrative, and the embodiments of the present application are not limited to this.
In the embodiment of the application, an electronic game scene is taken as an example, a user can operate on the terminal in advance, after the terminal detects the operation of the user, a game configuration file of the electronic game can be downloaded, and the game configuration file can include an application program, interface display data or virtual scene data of the electronic game, so that the user can call the game configuration file when logging in the electronic game on the terminal to render and display an electronic game interface. A user may perform a touch operation on a terminal, and after the terminal detects the touch operation, the terminal may determine game data corresponding to the touch operation, and render and display the game data, where the game data may include virtual scene data, behavior data of a virtual object in the virtual scene, and the like.
When rendering and displaying the virtual scene, the terminal can display it in full screen, and can also independently display a global map in a first preset area of the current display interface. The global map shows a thumbnail of the virtual scene, which describes geographic features such as the terrain, landforms, and geographic positions of the virtual scene. The terminal can also display, on the current display interface, a thumbnail of the virtual scene within a certain distance around the current virtual object, and, when a click operation on the global map is detected, display a thumbnail of the whole virtual scene in a second preset area, so that the user can view both the surrounding scene and the whole scene. When the terminal detects a zoom operation on the complete thumbnail, it can also zoom the complete thumbnail. The display positions and shapes of the first and second preset areas can be set according to the user's operating habits. For example, to avoid excessive occlusion of the virtual scene, the first preset area may be a rectangular area in the upper right, lower right, upper left, or lower left corner of the current display interface, and the second preset area may be a square area on the right or left side of the interface; of course, both areas may also be circular or have other shapes, and the embodiments of the present application do not limit their specific display positions and shapes.
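The corner placement of such a preset area can be sketched with a small helper. This is a hypothetical illustration (coordinate origin assumed at the top-left of the display interface; all names invented):

```python
def corner_region(screen_w, screen_h, region_w, region_h, corner="top_right"):
    """Return (x, y, w, h) of a rectangular preset area in a screen corner."""
    x = screen_w - region_w if "right" in corner else 0
    y = screen_h - region_h if "bottom" in corner else 0
    return (x, y, region_w, region_h)
```

For a 1920x1080 interface, a 300x200 global map in the upper-right corner would occupy the rectangle starting at x = 1620, y = 0.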
Fig. 1 is a schematic structural diagram of an implementation environment provided in an embodiment of the present application, and as shown in fig. 1, the implementation environment includes a terminal 101 and a server 102. Optionally, the terminal 101 is a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart television, a smart watch, and the like, but is not limited thereto. Optionally, the server 102 is an independent physical server, or the server 102 is a server cluster or a distributed system formed by a plurality of physical servers, or the server 102 is a cloud server providing basic cloud computing services such as cloud service, cloud database, cloud computing, cloud function, cloud storage, web service, cloud communication, middleware service, domain name service, security service, CDN (Content Delivery Network), big data and artificial intelligence platform. The terminal 101 and the server 102 are directly or indirectly connected by wired or wireless communication, and the present application is not limited thereto.
The server 102 provides the terminal 101 with a virtual scene, the terminal 101 is capable of displaying virtual objects, virtual items, and the like through the virtual scene provided by the server 102, and the terminal 101 provides an operating environment for the user to detect an operation performed by the user. The server 102 can perform background processing for the operation detected by the terminal, and provide background support for the terminal 101.
Alternatively, the terminal 101 installs a game application served by the server 102, through which the terminal 101 and the server 102 can interact. The terminal 101 runs the game application, provides an operation environment for the game application for a user, can detect the operation of the user on the game application, sends an operation instruction to the server 102, the server 102 responds according to the operation instruction, returns a response result to the terminal 101, and the terminal 101 displays the response result, so that man-machine interaction is achieved.
The method provided by the embodiment of the application can be used for electronic game scenes.
For example, in a shooting game scenario:
the terminal runs a shooting game and can control a virtual object to shoot other virtual objects in the virtual scene. To preserve the feel of a real scene, the terminal can also control the virtual object to shoot virtual objects of its own camp, simulating the fact that teammates can be attacked in a real scene. To prevent malicious attacks between virtual objects of the same camp, the virtual object control method provided by the embodiment of the application is adopted: when the virtual object controlled by the terminal has killed own-camp virtual objects too many times, or the attacked virtual object has been killed by own-camp virtual objects too many times, the life value of the attacking virtual object controlled by the terminal is reduced, so that the game can proceed normally.
Fig. 2 is a flowchart of a virtual object control method provided in an embodiment of the present application, and is applied to a terminal, as shown in fig. 2, the method includes:
201. the terminal displays the first virtual object and the second virtual object.
In the embodiment of the application, the virtual scene includes at least two opposing camps, and virtual objects belonging to different camps can attack each other. Furthermore, to enhance the realism of the virtual scene, virtual objects belonging to the same camp can also attack each other. For example, the virtual scene includes a virtual object 1, a virtual object 2, a virtual object 3, and a virtual object 4; the virtual object 1 and the virtual object 2 belong to a first camp, the virtual object 3 and the virtual object 4 belong to a second camp, and the first camp and the second camp are different camps. The virtual object 1 can attack the virtual object 3 or the virtual object 4, and can also attack the virtual object 2.
In the embodiment of the present application, taking the first virtual object and the second virtual object as an example, the first virtual object and the second virtual object belong to the same formation, that is, a teammate relationship is formed between the first virtual object and the second virtual object.
202. And the terminal receives an attack instruction of the first virtual object to the second virtual object.
The attack instruction is used for indicating the first virtual object to attack the second virtual object.
203. If the first number of times that the first virtual object has killed own-camp virtual objects, or the second number of times that the second virtual object has been killed by own-camp virtual objects, is not less than the reference number of times, the terminal controls the life value of the first virtual object to decrease.
When one virtual object attacks another and causes the other's life value to drop to 0, the virtual object is considered to have killed the other virtual object once. The first number represents the total number of times the first virtual object has killed other virtual objects belonging to the same camp, and the second number represents the total number of times the second virtual object has been killed by other virtual objects belonging to the same camp.
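The two counters defined above can be tracked per virtual object. As a hypothetical sketch (the application does not prescribe a data structure; all names are invented):

```python
from collections import defaultdict

class TeamkillTracker:
    """Tracks the "first number" (teammates killed) and the "second number"
    (deaths caused by teammates) for each virtual object."""

    def __init__(self):
        self.kills_of_teammates = defaultdict(int)   # the "first number"
        self.deaths_by_teammates = defaultdict(int)  # the "second number"

    def record_kill(self, killer_id, victim_id, same_camp):
        # a kill is counted when the victim's life value drops to 0;
        # only same-camp kills affect the two counters
        if same_camp:
            self.kills_of_teammates[killer_id] += 1
            self.deaths_by_teammates[victim_id] += 1
```

Step 203 would then compare `kills_of_teammates[attacker]` and `deaths_by_teammates[victim]` against the reference number of times.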
According to the method provided by the embodiment of the application: in a real scene, a person can attack teammates; therefore, to simulate the real scene, a virtual object in the virtual scene can attack other virtual objects belonging to the same camp, which improves the realism of the virtual scene. When two virtual objects belonging to the same camp fight, if the attacking virtual object has killed teammates too many times, or the attacked virtual object has been killed by teammates too many times, the life value of the attacking virtual object is controlled to decrease, punishing the attacker and preventing malicious attacks between virtual objects belonging to the same camp.
Fig. 3 is a flowchart of a virtual object control method provided in an embodiment of the present application, and is applied to a terminal, as shown in fig. 3, the method includes:
301. the terminal displays the first virtual object and the second virtual object.
In the embodiment of the application, the first virtual object is a virtual object controlled by a terminal, and the second virtual object is a virtual object controlled by other terminals; or the second virtual object is a virtual object controlled by a terminal, and the first virtual object is a virtual object controlled by other terminals.
In one possible implementation, this step 301 includes: the terminal displays a virtual scene interface, which includes a first virtual object and a second virtual object.
The virtual scene interface is used to display a virtual scene and the virtual objects in it. Taking the case where the terminal controls the first virtual object as an example: if the virtual scene is displayed from the first-person perspective of the first virtual object, the virtual scene interface displays the part of the virtual scene within the first virtual object's view range and the second virtual object located in that part of the scene; if the virtual scene is displayed from a third-person perspective, the virtual scene interface displays the part of the virtual scene within the view range of the third-person perspective, along with the first virtual object and the second virtual object located in that part of the scene.
Optionally, virtual scenes of different areas can be displayed in the virtual scene interface by performing a view-angle rotation operation on the virtual scene interface. In response to the view-angle rotation operation on the virtual scene interface, the partial scene area within the rotated view-angle range is displayed in the virtual scene interface.
In one possible implementation, before step 301, the method further includes: the terminal displays a mode selection interface that includes a plurality of competition modes and a confirmation option; when any competition mode is selected and a trigger operation on the confirmation option is detected, the terminal displays the virtual scene interface, which includes the first virtual object and the second virtual object.
The mode selection interface is used to display a plurality of competition modes, and different competition modes have different winning conditions. For example, the winning condition in one competition mode is to eliminate all virtual objects of the opposing camp, and the winning condition in another competition mode is to seize a target area in the virtual scene. All of these competition modes simulate real scenes and belong to the hardcore mode; in any of them, a virtual object can attack friendly virtual objects in the virtual scene.
By selecting a competition mode on the mode selection interface, the virtual objects of the multiple camps can compete in the virtual scene according to the selected competition mode.
As shown in fig. 4, a plurality of competition modes are displayed in the mode selection interface, including a hardcore tactical team competition, a hardcore site competition, and a hardcore hot site competition. After entering the virtual scene in any one of these competition modes, competition proceeds according to the corresponding mode in order to win the game.
302. The terminal receives an attack instruction for the first virtual object to attack the second virtual object.
The manner in which the attack instruction is received when the terminal controls the first virtual object differs from the manner in which it is received when the terminal controls the second virtual object.
In one possible implementation, the first virtual object is a virtual object controlled by the terminal and the second virtual object is a virtual object controlled by another terminal, and step 302 includes: detecting a trigger operation on an attack option while the second virtual object is selected.
The attack option is an option displayed on the virtual scene interface; for example, the attack option is a button, a slider, or the like. Detecting a trigger operation on the attack option while the second virtual object is selected indicates that the first virtual object is about to perform an attack operation on the second virtual object, so the terminal generates the attack instruction and, according to it, controls the first virtual object to attack the second virtual object.
Optionally, the attack option is a skill button of the first virtual object. Detecting a trigger operation on the skill button while the second virtual object is selected indicates that the first virtual object is about to release the skill corresponding to the skill button to attack the second virtual object, so the terminal generates the attack instruction and, according to it, controls the first virtual object to attack the second virtual object using the skill corresponding to the skill button.
In one possible implementation, the first virtual object is a virtual object controlled by the terminal and the second virtual object is a virtual object controlled by another terminal, and step 302 includes: detecting a shooting operation performed by the first virtual object on the second virtual object using a virtual gun; or, detecting a slashing operation performed by the first virtual object on the second virtual object using a virtual knife.
The virtual gun is a gun in the virtual scene, and the virtual knife is a knife in the virtual scene. In the virtual scene, a virtual object holding a virtual gun can shoot other virtual objects, and a virtual object holding a virtual knife can slash other virtual objects, which simulates shooting with a gun or slashing with a knife in a real environment and enhances the reality of the virtual scene.
In one possible implementation, the second virtual object is a virtual object controlled by the terminal and the first virtual object is a virtual object controlled by another terminal, and step 302 includes: the other terminal detects the attack operation of the first virtual object on the second virtual object and sends an attack instruction carrying a first virtual object identifier and a second virtual object identifier to the server; the server forwards the attack instruction to the terminal, and the terminal receives it.
The first virtual object identifier indicates the first virtual object, and the second virtual object identifier indicates the second virtual object. When the other terminal controls the first virtual object to attack the second virtual object, the server forwards the attack instruction to ensure that instructions are synchronized across terminals, so that the virtual scenes subsequently displayed on different terminals remain synchronized.
It should be noted that, after step 302 is executed, whether the life value of the first virtual object or of the second virtual object decreases is determined according to whether the first count of times the first virtual object has killed friendly virtual objects, or the second count of times the second virtual object has been killed by friendly virtual objects, satisfies the reference condition; that is, after step 302, the following step 303 or step 304 is executed.
303. If the first count of times the first virtual object has killed friendly virtual objects, or the second count of times the second virtual object has been killed by friendly virtual objects, is not less than the reference count, the terminal controls the life value of the first virtual object to decrease.
A friendly virtual object of the first virtual object is a virtual object belonging to the same camp as the first virtual object; a friendly virtual object of the second virtual object is a virtual object belonging to the same camp as the second virtual object.
Optionally, when the first count is greater than 1, the first virtual object has killed one friendly virtual object multiple times, or has killed multiple friendly virtual objects. For example, if virtual object 1, virtual object 2, and virtual object 3 belong to the same camp in the virtual scene, a first count of 2 for virtual object 1 means that virtual object 1 has killed virtual object 2 twice, or has killed virtual object 3 twice, or has killed virtual object 2 and virtual object 3 once each.
Optionally, when the second count is greater than 1, the second virtual object has been killed by one friendly virtual object multiple times, or has been killed by multiple friendly virtual objects. For example, if virtual object 1, virtual object 2, and virtual object 3 belong to the same camp in the virtual scene, a second count of 2 for virtual object 1 means that virtual object 1 has been killed twice by virtual object 2, or twice by virtual object 3, or once each by virtual object 2 and virtual object 3.
The reference count is an arbitrary number, for example, 2 or 3. If the first count is not less than the reference count, the number of times the first virtual object has killed friendly virtual objects has reached the allowed maximum; if the second count is not less than the reference count, the number of times the second virtual object has been killed by friendly virtual objects has reached the allowed maximum. In either case, the life value of the second virtual object is not controlled to decrease; instead, the life value of the first virtual object is controlled to decrease. This ensures the normal operation of the virtual scene and, while preserving the authenticity of the virtual scene, avoids malicious attacks among virtual objects belonging to the same camp.
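The decision in steps 303 and 304 can be sketched as a small function. This is a minimal illustration of the described logic; the function and parameter names (`resolve_friendly_fire`, `reference_count`, and so on) are assumptions for the sketch, not identifiers from the patent.

```python
def resolve_friendly_fire(first_count, second_count, reference_count):
    """Return which virtual object's life value should decrease.

    first_count  -- times the attacking (first) virtual object has killed teammates
    second_count -- times the attacked (second) virtual object has been killed by teammates
    """
    if first_count >= reference_count or second_count >= reference_count:
        # Malicious-attack threshold reached: punish the attacker (step 303).
        return "first"
    # Normal friendly fire: the attacked object takes the damage (step 304).
    return "second"
```

For example, with a reference count of 2, an attacker who has already killed teammates twice takes the damage itself, while a first-time friendly-fire attack damages the attacked object as usual.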
In one possible implementation, the attack instruction carries a reference value, and step 303 includes: if the first count or the second count is not less than the reference count, controlling the life value of the first virtual object to decrease by the reference value. The reference value is an arbitrary value, for example, 30 or 60.
Optionally, the reference value is determined according to the type of the attack operation performed by the first virtual object on the second virtual object. For example, if the attack operation is a shooting operation and each shooting operation can reduce the life value of a virtual object by 30, then 30 is used as the reference value; if the attack operation is a slashing operation and each slashing operation can reduce the life value of a virtual object by 20, then 20 is used as the reference value.
By determining the reference value from the type of the attack operation, the life value of the first virtual object is subsequently controlled to decrease according to that type. The attack effect of the first virtual object on the second virtual object is thus still reflected in the virtual scene, but it is the life value of the first virtual object that decreases, reflecting a transfer of the attack effect.
In one possible implementation, the attack instruction carries a reference proportion, and step 303 includes: if the first count or the second count is not less than the reference count, controlling the life value of the first virtual object to decrease by the reference proportion. The reference proportion is an arbitrary value, for example, 30% or 45%.
Optionally, the reference proportion is determined according to the type of the attack operation performed by the first virtual object on the second virtual object. For example, if the attack operation is a shooting operation, the reference proportion is determined to be 30%; if the attack operation is a slashing operation, the reference proportion is determined to be 20%.
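The two damage schemes above (a fixed reference value or a reference proportion, each derived from the attack-operation type) can be sketched as follows. The function name, the lookup tables, and the choice of applying the proportion to the maximum life value are illustrative assumptions; the magnitudes (30 and 20, 30% and 20%) come from the examples in the text.

```python
DAMAGE_BY_ATTACK_TYPE = {"shoot": 30, "slash": 20}          # fixed reference values
PROPORTION_BY_ATTACK_TYPE = {"shoot": 0.30, "slash": 0.20}  # reference proportions

def apply_damage(life_value, max_life, attack_type, proportional=False):
    """Return the life value after one attack of the given type."""
    if proportional:
        # Assumed here: the proportion is taken of the maximum life value.
        damage = max_life * PROPORTION_BY_ATTACK_TYPE[attack_type]
    else:
        damage = DAMAGE_BY_ATTACK_TYPE[attack_type]
    # A life value never drops below 0.
    return max(0, life_value - damage)
```

A shooting operation against a full 100-point life value would leave 70 points under the fixed scheme, and 70 points under the proportional scheme as well, since 30% of 100 is 30.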
In one possible implementation, step 303 includes: if the first count of times the first virtual object has killed friendly virtual objects, or the second count of times the second virtual object has been killed by friendly virtual objects, is not less than the reference count, the terminal controls the life value of the first virtual object to decrease and displays, in the virtual scene interface, a picture of the life value of the first virtual object decreasing.
In addition, in a real scene, a person's state is affected after an injury; for example, the person moves less easily, or blood continues to flow from the wound. To enhance the reality of the virtual scene, in one possible implementation, in response to the current life value of the first virtual object being less than the maximum life value of the first virtual object, the moving speed of the first virtual object is reduced, and the life value of the first virtual object is controlled to decrease continuously.
When the current life value of the first virtual object is less than its maximum life value, the first virtual object has been attacked. Therefore, to simulate how an injury affects a person's state in a real scene, the moving speed of the first virtual object is reduced and its life value is controlled to decrease continuously, which enhances the reality of the virtual scene.
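The injured-state behaviour described above can be sketched as a small class: once the current life value drops below the maximum, movement slows and the life value keeps draining every game tick. The class name, the 50% slowdown factor, and the bleed rate are all illustrative assumptions.

```python
class VirtualObject:
    def __init__(self, max_life=100, base_speed=5.0):
        self.max_life = max_life
        self.life = max_life
        self.base_speed = base_speed

    @property
    def move_speed(self):
        # Reduced movement speed while injured (illustrative 50% factor).
        return self.base_speed * (0.5 if self.life < self.max_life else 1.0)

    def tick(self, bleed_per_tick=1):
        # Continuous life-value reduction while injured.
        if self.life < self.max_life:
            self.life = max(0, self.life - bleed_per_tick)
```

An uninjured object moves at full speed and loses nothing per tick; after taking any damage, it both slows down and bleeds until healed or eliminated.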
304. If the first count and the second count are both less than the reference count, the terminal controls the life value of the second virtual object to decrease.
If the first count and the second count are both less than the reference count, neither the number of times the first virtual object has killed friendly virtual objects nor the number of times the second virtual object has been killed by friendly virtual objects has reached the allowed maximum, and the attack of the first virtual object on the second virtual object has not reached the level of a malicious attack. Therefore, to simulate a real scene in which teammates can be attacked, the life value of the attacked second virtual object is controlled to decrease, so that attacks between virtual objects belonging to the same camp are realized in the virtual scene and the authenticity of the virtual scene is enhanced.
In one possible implementation, the attack instruction carries a reference value, and step 304 includes: if the first count and the second count are both less than the reference count, controlling the life value of the second virtual object to decrease by the reference value.
In one possible implementation, step 304 includes: if the first count and the second count are both less than the reference count, the terminal controls the life value of the second virtual object to decrease and displays, in the virtual scene interface, a picture of the life value of the second virtual object decreasing. As shown in fig. 5, the first virtual object performs a shooting operation on the second virtual object, and a picture of the life value of the second virtual object decreasing is displayed in the virtual scene interface.
In one possible implementation, in response to the current life value of the second virtual object being less than the maximum life value of the second virtual object, the moving speed of the second virtual object is reduced, and the life value of the second virtual object is controlled to be continuously reduced.
In addition, in the game scene, if the current game round ends, a picture of each kill made during the round is played in the virtual scene interface, or a picture of the last kill is played after the round ends.
The method provided by this embodiment of the application can be applied to a shooting game, in which virtual objects are controlled to use hot weapons to attack virtual objects in the opposing camp. In addition, in this embodiment of the application, to simulate a real gunfight scene and enhance the reality of the virtual scene, the virtual scene includes virtual guns that simulate a real environment, the life values of virtual objects in the virtual scene are low and cannot be restored automatically, and a virtual object can use a virtual gun to attack friendly virtual objects and injure teammates, thereby realizing a realistic virtual scene. Further, since a person cannot view another person's life value in a real scene, the HUD (head-up display) is restricted in the virtual scene interface in order to simulate the real scene: the life value of a virtual object, the nickname of a virtual object, and the like are no longer displayed in the virtual scene interface, so that the virtual scene is displayed in Hardcore (realistic) mode, which enhances the reality of the virtual scene.
In the method provided by this embodiment of the application, since a person can attack teammates in a real scene, a virtual object is allowed, in order to simulate the real scene, to attack other virtual objects belonging to the same camp in the virtual scene, which improves the authenticity of the virtual scene. When an attack occurs between two virtual objects belonging to the same camp, if the attacking virtual object has killed teammates too many times, or the attacked virtual object has been killed by teammates too many times, the life value of the attacking virtual object is controlled to decrease, so that the attacking virtual object is punished and malicious attacks between virtual objects belonging to the same camp are avoided.
It should be noted that, in this embodiment of the application, after receiving the attack instruction, the terminal controls the life value of the first virtual object or the second virtual object to decrease according to the first count of the first virtual object or the second count of the second virtual object. In another embodiment, the server determines whether the life value of the first virtual object or of the second virtual object should decrease according to the first count of the first virtual object or the second count of the second virtual object, and sends a life value reduction instruction to the corresponding terminal, so that the corresponding terminal controls the corresponding life value to decrease.
Fig. 6 is a flowchart of a virtual object control method provided in an embodiment of the present application, and as shown in fig. 6, the method includes:
1. After detecting an attack operation of the first virtual object on a second virtual object controlled by the second terminal, the first terminal sends an attack instruction to the server; the attack instruction carries a first virtual object identifier and a second virtual object identifier.
The first terminal controls the first virtual object, the second terminal controls the second virtual object, and the first virtual object and the second virtual object belong to the same camp.
2. The server queries the database according to the attack instruction and determines the first count of the first virtual object and the second count of the second virtual object. If the first count or the second count is not less than the reference count, the server sends a life value reduction instruction to the first terminal; if the first count and the second count are both less than the reference count, the server sends the life value reduction instruction to the second terminal.
3. In response to the first terminal receiving the life value reduction instruction, the first terminal controls the life value of the first virtual object to decrease.
4. In response to the second terminal receiving the life value reduction instruction, the second terminal controls the life value of the second virtual object to decrease.
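The server-side routing in the fig. 6 flow can be sketched as follows: the server looks up both kill counts and decides which terminal receives the life-value reduction instruction. The database shape (a plain dictionary), the field names, and the return format are all assumptions for the sketch; a real implementation would send the instruction over the network rather than return it.

```python
def route_attack(db, attack_instruction, reference_count):
    """Return (target_terminal, instruction) for one attack instruction."""
    first_id = attack_instruction["first_id"]
    second_id = attack_instruction["second_id"]
    # Look up the two counts recorded for these virtual objects (default 0).
    first_count = db.get(first_id, {}).get("teamkills", 0)
    second_count = db.get(second_id, {}).get("teamkilled", 0)
    if first_count >= reference_count or second_count >= reference_count:
        # Step 2, first branch: punish the attacker via the first terminal.
        return ("first_terminal", {"reduce_life_of": first_id})
    # Step 2, second branch: normal friendly fire damages the attacked object.
    return ("second_terminal", {"reduce_life_of": second_id})
```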
Fig. 7 is a flowchart of a virtual object control method provided in an embodiment of the present application, and is applied to a terminal, as shown in fig. 7, the method includes:
701. the terminal displays a virtual scene interface, which includes the first virtual object and does not include the thumbnail map.
The thumbnail map is a miniature map of the virtual scene, obtained by scaling the virtual scene according to a scaling ratio.
702. The terminal receives a release instruction for a virtual scout of the camp to which the first virtual object belongs.
The virtual scout is a scout aircraft in the virtual scene, for example, a UAV (Unmanned Aerial Vehicle). Optionally, the virtual scout is a virtual drone, a virtual airplane, or the like. The virtual scout is used to scout the positions of all virtual objects in the virtual scene.
703. The terminal displays a thumbnail map of the virtual scene in the virtual scene interface, where the thumbnail map includes the position identifiers of all virtual objects in the virtual scene.
The position identifier is used to indicate the position of a virtual object. By displaying a thumbnail map of the virtual scene in the virtual scene interface, the user can view the thumbnail map to determine the position of any one or more virtual objects.
In the method provided by this embodiment of the application, when a person is in an area, the person cannot know the real scene outside their field of view, but can learn about it by looking at a map of the area. Therefore, the thumbnail map of the virtual scene is not displayed when the virtual scene interface is first displayed, so that the user cannot learn about the parts of the virtual scene other than the part displayed in the virtual scene interface; the thumbnail map is displayed in the virtual scene interface only after a virtual scout of the camp to which the local virtual object belongs has been released, so as to indicate the positions of the virtual objects in the virtual scene. This enhances the reality of the virtual scene and increases its interest.
Fig. 8 is a flowchart of a virtual object control method provided in an embodiment of the present application, and is applied to a terminal, as shown in fig. 8, the method includes:
801. the terminal displays a virtual scene interface, which includes the first virtual object and does not include the thumbnail map.
In a real scene, multiple people may be located at different positions in the same area, and because buildings in the area block the view, a person who has not acquired a map cannot know the positions of the others. Therefore, the thumbnail map is not displayed in the virtual scene interface, so as to simulate the situation in a real scene where a person cannot know the positions of others, which enhances the reality of the virtual scene.
The thumbnail map is a miniature map of the virtual scene. Optionally, a picture of the entire virtual scene is captured from a top-down view angle, and the captured virtual scene is scaled according to a scaling ratio to obtain the thumbnail map. Optionally, the thumbnail map includes the positions of the virtual buildings, virtual roads, and virtual objects in the virtual scene. If the thumbnail map is displayed in the virtual scene interface, the user can view it to learn the position of each virtual object in the virtual scene, the position of the virtual object the user controls, and a route for moving the controlled virtual object from its current position to a target position. Fig. 9 is a schematic diagram of a virtual scene interface displayed after each game round starts, in which the first virtual object and the skill options of the virtual object are displayed but the thumbnail map is not.
802. The terminal receives a release instruction for a virtual scout of the camp to which the first virtual object belongs.
In this embodiment of the application, multiple virtual objects in the camp to which the first virtual object belongs own virtual scouts. The release instruction is used to instruct the release of a virtual scout owned by any virtual object in the camp to which the first virtual object belongs. Optionally, the virtual scout is a virtual scout owned by the first virtual object, or a virtual scout owned by another virtual object in the camp to which the first virtual object belongs.
In one possible implementation manner, the first virtual object is a terminal-controlled virtual object, and the step 802 includes: a release operation is detected for a virtual scout owned by a first virtual object.
Optionally, a touch operation on a release option of the virtual scout displayed in the virtual scene interface is detected. The release option is used to release the virtual scout owned by the first virtual object. Optionally, the release option is a skill button for releasing the virtual scout owned by the first virtual object. The virtual scout owned by the first virtual object is released through the touch operation on the release option, so that the thumbnail map of the virtual scene is subsequently displayed.
In one possible implementation, step 802 includes: another terminal detects a release operation on a virtual scout owned by another virtual object that it controls and sends a release instruction to the server; the server receives the release instruction and sends it to the terminal controlling the first virtual object. The first virtual object and the other virtual object belong to the same camp.
803. The terminal adds the position identifier of each virtual object to the thumbnail map according to the position of each virtual object in the virtual scene.
In this embodiment of the application, since each virtual object in the virtual scene may move about and its position may change continuously, the position of each virtual object in the virtual scene needs to be determined so that its position identifier can be added to the thumbnail map, thereby obtaining a complete thumbnail map.
Wherein the position identifier is used for indicating the position of the virtual object. Alternatively, the position mark is a circle mark, a triangle mark, or the like. Alternatively, in the thumbnail map, the shapes of the position marks of the virtual objects belonging to the same camp are the same, and the shapes of the position marks of the virtual objects of different camps are different. For example, the position identifier of each virtual object in the camp to which the virtual object controlled by the current terminal belongs is a circle identifier, and the position identifier of each virtual object in the opposite camp is a triangle identifier.
Optionally, in the thumbnail map, the shape of the position identifier of each virtual object is the same, the color of the position identifier of the virtual object belonging to the same camp is the same, and the color of the position identifier of the virtual object belonging to different camps is different. For example, in the virtual scene, the position identifier of each virtual object is a circular identifier, the position identifier of each virtual object in the camp to which the virtual object currently controlled by the terminal belongs is a green identifier, and the position identifier of each virtual object in the camp is a red identifier. Optionally, the position identification of the virtual object currently controlled by the terminal is highlighted.
The positions of the virtual objects in the virtual scene are determined and mapped into the thumbnail map, thereby determining the positions of the virtual objects in the thumbnail map; the position identifier of each virtual object is then added at the corresponding position to show that object's position in the thumbnail map.
In one possible implementation, the step 803 includes the following steps 8031 and 8032:
8031. Determine the target position of the second virtual object in the thumbnail map according to the position of any second virtual object in the virtual scene and the scaling ratio between the virtual scene and the thumbnail map.
The second virtual object is any virtual object in the virtual scene, and the scaling ratio represents the ratio between the size of the thumbnail map and the size of the virtual scene, for example, 30% or 60%.
Since the thumbnail map is obtained by scaling the virtual scene according to the scaling ratio, the position of any second virtual object in the virtual scene can be mapped into the thumbnail map through that position and the scaling ratio, thereby determining the target position in the thumbnail map that corresponds to the position of the second virtual object.
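The direct mapping just described can be sketched in one line. The coordinate convention (a shared origin and a uniform scaling ratio on both axes) is an assumption for the sketch.

```python
def scene_to_map(position, scale):
    """Map an (x, y) scene position into thumbnail-map coordinates."""
    x, y = position
    # Uniform scaling: both axes shrink by the same ratio.
    return (x * scale, y * scale)
```

For instance, with a 50% scaling ratio, a virtual object at scene position (100, 200) lands at (50, 100) on the thumbnail map.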
In one possible implementation manner, the virtual scene includes a plurality of first reference positions, the thumbnail map includes a plurality of second reference positions, and the plurality of first reference positions correspond to the plurality of second reference positions one to one; this step 8031 includes: determining a first distance between the position of a second virtual object in the virtual scene and each first reference position, and carrying out scaling processing on a plurality of first distances according to a scaling ratio to obtain a plurality of second distances; and positioning the target position according to the plurality of second reference positions and the corresponding second distances so as to enable the target position to be separated from each second reference position by the corresponding second distance.
Since the plurality of first reference positions in the virtual scene correspond to the plurality of second reference positions in the thumbnail map one by one, that is, one first reference position corresponds to one second reference position, after a first distance between a position of the second virtual object in the virtual scene and each first reference position is determined, a second distance between the second virtual object and each second reference position can be determined by scaling, and then a target position which is separated from each second reference position by a corresponding second distance can be located by the plurality of second reference positions and the second distance corresponding to each second reference position. As shown in fig. 10, the plurality of first reference positions in the virtual scene include a first reference position a, a first reference position B, and a first reference position C, as shown in fig. 11, the plurality of second reference positions in the thumbnail map include a second reference position 1, a second reference position 2, and a second reference position 3, the first reference position a corresponds to the second reference position 1, the first reference position B corresponds to the second reference position 2, and the first reference position C corresponds to the second reference position 3.
Optionally, a plurality of circles are determined by taking each second reference position as a center and the corresponding second distance as a radius, and the position corresponding to the intersection of the plurality of circles is taken as the target position.
Locating the target position through the intersection of these circles guarantees that it is separated from each second reference position by the corresponding second distance.
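The circle-intersection step described above can be sketched in a few lines. The following solver is illustrative rather than taken from the patent: it linearises the three circle equations by subtracting them pairwise, which yields the unique point lying on all three circles when the three centers are not collinear.

```python
def trilaterate(refs, dists):
    """Locate the target position from three second reference positions
    (circle centers) on the thumbnail map and their second distances
    (radii). Subtracting the circle equations pairwise gives a 2x2
    linear system whose solution lies on all three circles."""
    (x1, y1), (x2, y2), (x3, y3) = refs
    r1, r2, r3 = dists
    # Linearised system: A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        raise ValueError("reference positions are collinear")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return (x, y)
```

For example, with second reference positions (0, 0), (4, 0), (0, 4) and second distances measured from the point (1, 1), the solver recovers (1, 1).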
In one possible implementation, the virtual scene includes a plurality of first reference positions, the thumbnail map includes a plurality of second reference positions, and the plurality of first reference positions correspond to the plurality of second reference positions one to one. In this case, step 8031 includes: determining a first distance and a first direction between the position of a second virtual object in the virtual scene and each first reference position; scaling the plurality of first distances according to the scaling ratio to obtain a plurality of second distances; and locating the target position according to the plurality of second reference positions, the corresponding second distances, and the corresponding first directions, so that the target position is separated from each second reference position by the corresponding second distance.
For each pair of corresponding first and second reference positions, the direction from the position of the second virtual object in the virtual scene to the first reference position is the same as the direction from the position of the second virtual object in the thumbnail map to the second reference position.
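Under the assumption just stated — that the direction from the second virtual object to a reference position is the same in the virtual scene and in the thumbnail map — a single reference position already suffices to locate the target. A minimal sketch with illustrative names:

```python
import math

def locate_with_direction(second_ref, scene_dir, second_dist):
    """Locate the target from one second reference position on the
    thumbnail map. `scene_dir` is the first direction measured in the
    virtual scene from the second virtual object toward the
    corresponding first reference position; since the same direction
    holds in the thumbnail map, the target sits `second_dist` away
    from the reference position, on the opposite side of it."""
    rx, ry = second_ref
    dx, dy = scene_dir
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm  # normalise to a unit direction
    return (rx - ux * second_dist, ry - uy * second_dist)
```

For example, a reference position at (10, 10), a scene direction of (1, 0) from the object toward the reference, and a second distance of 4 place the target at (6, 10).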
8032. And adding the position identification of the second virtual object in the thumbnail map according to the target position.
The target position is the position of the second virtual object in the thumbnail map; therefore, the position identifier of the second virtual object is added at the target position of the thumbnail map to represent the position of the second virtual object.
804. And the terminal displays a thumbnail map of the virtual scene in the virtual scene interface, wherein the thumbnail map comprises position identifications of all virtual objects in the virtual scene.
The thumbnail map of the virtual scene is displayed in the virtual scene interface so that the user can conveniently view it, determine the position of each virtual object, and control the first virtual object to move accordingly. As shown in fig. 12, a thumbnail map is displayed in the virtual scene interface; it shows each virtual building and virtual road in the virtual scene in a top view and displays the position of each virtual object in the virtual scene.
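In the simplest case, placing a position identifier for every virtual object (steps 8031 and 8032) reduces to multiplying scene coordinates by the zoom ratio. A minimal sketch, assuming a uniform ratio and using illustrative names:

```python
def add_position_markers(scene_positions, zoom_ratio):
    """Map each virtual object's coordinates in the virtual scene to
    its marker position on the thumbnail map using the zoom ratio
    between the two, so every position identifier lands at the
    scaled-down counterpart of the object's scene position."""
    return {obj_id: (x * zoom_ratio, y * zoom_ratio)
            for obj_id, (x, y) in scene_positions.items()}
```

For instance, with a zoom ratio of 0.01, an object standing at scene coordinates (100, 200) receives a marker at (1, 2) on the thumbnail map.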
In one possible implementation, after step 804, the method further includes: receiving an interference instruction from the opposing camp, and canceling display of the thumbnail map in the virtual scene interface. The interference instruction is used to interfere with the virtual scout of the camp to which the first virtual object belongs, so that the virtual scene interface corresponding to that camp can no longer display the thumbnail map.
Optionally, another terminal detects a release operation on a virtual radar and sends an interference instruction to the server; the server forwards the interference instruction to the terminal, and the terminal, upon receiving it, cancels display of the thumbnail map in the virtual scene interface.
The virtual object controlled by the other terminal belongs to a different camp from the first virtual object controlled by the terminal, and the virtual radar is used to interfere with the virtual scout of the opposing camp.
In a real scene, a scout aircraft can survey the map conditions of any area, such as the buildings, people, and roads in that area, and its survey can be disrupted by signals emitted by a radar, so that it can no longer view the map conditions of the area. Therefore, by adding the virtual scout and the virtual radar to the virtual scene, the virtual radar can interfere with the virtual scout of the opposing camp, so that the thumbnail map is no longer displayed in the virtual scene interface corresponding to that camp. This realizes the effect of a Counter UAV (Unmanned Aerial Vehicle) jamming a scout aircraft and enhances the reality of the virtual scene.
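The release/interference flow described above can be sketched as a small piece of terminal-side display state. The class and method names below are illustrative assumptions, not the patent's actual implementation:

```python
class VirtualSceneUI:
    """Terminal-side display state for the thumbnail map: the map is
    hidden when the interface opens, shown after a release instruction
    for a friendly virtual scout, and hidden again after an
    interference instruction from the opposing camp's virtual radar."""

    def __init__(self):
        self.thumbnail_visible = False  # not shown when the interface opens

    def on_release_instruction(self):
        # A virtual scout of the local camp was released: show the map.
        self.thumbnail_visible = True

    def on_interference_instruction(self):
        # The opposing camp's virtual radar jams the friendly scout:
        # cancel display of the thumbnail map.
        self.thumbnail_visible = False
```

A terminal would call `on_release_instruction` when the server relays a release instruction, and `on_interference_instruction` when it relays an interference instruction.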
It should be noted that, in this embodiment of the present application, after the release instruction is received, the position identifier of each virtual object is first added to the thumbnail map, and the thumbnail map is then displayed in the virtual scene interface. In another embodiment, step 803 need not be executed, and the thumbnail map is displayed in the virtual scene interface directly after the release instruction is received.
In a real scene, a person in any area cannot know what lies outside his or her field of view, but can learn about it by looking at a map of the area. Accordingly, in the method provided by this embodiment of the present application, the thumbnail map of the virtual scene is not displayed when the virtual scene interface is first displayed, so that the user cannot obtain information about the parts of the virtual scene other than the part shown in the interface. The thumbnail map is displayed in the virtual scene interface only after the virtual scout of the camp to which the first virtual object belongs is released, thereby indicating the position of each virtual object in the virtual scene, enhancing the reality of the virtual scene, and increasing its interest.
Moreover, the position of each virtual object in the thumbnail map is determined through the scaling ratio between the virtual scene and the thumbnail map and the corresponding reference positions in both, which ensures that the position of a virtual object in the virtual scene corresponds to the position indicated by its identifier in the thumbnail map and improves the accuracy of the position identifiers.
Based on the above embodiments of the virtual object control method, in order to enhance the reality of the virtual scene, virtual objects belonging to the same camp can attack each other; and when the virtual scene interface is displayed, the thumbnail map of the virtual scene is not displayed until the virtual scout of a camp is released, after which the thumbnail map is displayed in the virtual scene interface corresponding to that camp.
Taking a game scene as an example, the embodiments shown in fig. 3 and fig. 8 are exemplarily described, as shown in fig. 13:
1. The terminal starts a game application, displays a mode selection interface, and displays a virtual scene interface in response to a selection operation on a battle mode in the mode selection interface.
2. The terminal detects the releasing operation of a virtual scout machine owned by a first virtual object controlled by the terminal, and displays the thumbnail map of the virtual scene in the virtual scene interface.
3. The terminal detects an attack operation of the first virtual object on a second virtual object, and determines the first number of times that the first virtual object has killed own-camp virtual objects and the second number of times that the second virtual object has been killed by own-camp virtual objects.
Wherein the first virtual object and the second virtual object belong to the same camp.
4. If the first number of times or the second number of times is not less than the reference number of times, the terminal controls the life value of the first virtual object to decrease; if both the first number of times and the second number of times are less than the reference number of times, the terminal controls the life value of the second virtual object to decrease.
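Steps 3 and 4 can be sketched as follows; the reference count value, the class, and all field names are illustrative assumptions:

```python
from dataclasses import dataclass

REFERENCE_COUNT = 3  # assumed threshold; the embodiment leaves the value open

@dataclass
class VirtualObject:
    life: int          # life (hit-point) value
    team_kills: int    # times this object has killed own-camp objects
    team_deaths: int   # times this object was killed by own-camp objects

def resolve_same_camp_attack(attacker, victim, damage):
    """If the attacker's team-kill count or the victim's team-death
    count reaches the reference count, the attack rebounds on the
    attacker; otherwise the victim's life value decreases as usual."""
    if attacker.team_kills >= REFERENCE_COUNT or victim.team_deaths >= REFERENCE_COUNT:
        attacker.life -= damage
    else:
        victim.life -= damage
```

This rule penalises repeat team-killers while still letting a camp eliminate a member who has been repeatedly killed by teammates, matching the branch structure of step 4.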
Fig. 14 is a schematic structural diagram of a virtual object control apparatus according to an embodiment of the present application, and as shown in fig. 14, the apparatus includes:
a display module 1401, configured to display a first virtual object and a second virtual object, where the first virtual object and the second virtual object belong to the same camp;
a receiving module 1402, configured to receive an attack instruction of a first virtual object on a second virtual object;
a first control module 1403, configured to control the life value of the first virtual object to decrease if the first number of times that the first virtual object has killed own-camp virtual objects, or the second number of times that the second virtual object has been killed by own-camp virtual objects, is not less than the reference number of times.
In one possible implementation, as shown in fig. 15, the apparatus further includes:
a second control module 1404 configured to control the life value of the second virtual object to decrease if both the first number and the second number are less than the reference number.
In another possible implementation, as shown in fig. 15, the first control module 1403 includes:
a control unit 1431, configured to control the life value of the first virtual object to decrease by the reference value if the first number or the second number is not less than the reference number.
In another possible implementation, as shown in fig. 15, the first control module 1403 includes:
a control unit 1431, configured to control the life value of the first virtual object to reduce the reference scale if the first number or the second number is not less than the reference number.
In another possible implementation, as shown in fig. 15, the receiving module 1402 includes:
a detection unit 1421, configured to detect a shooting operation of the first virtual object on the second virtual object using the virtual firearm; or,
the detecting unit 1421 is further configured to detect a slash operation of the first virtual object on the second virtual object using the virtual tool.
In another possible implementation, as shown in fig. 15, the receiving module 1402 includes:
the detecting unit 1421 is configured to detect a trigger operation on an attack option when the second virtual object is selected.
Fig. 16 is a schematic structural diagram of a virtual object control apparatus according to an embodiment of the present application, and as shown in fig. 16, the apparatus includes:
a display module 1601, configured to display a virtual scene interface, where the virtual scene interface includes a first virtual object and does not include a thumbnail map;
a receiving module 1602, configured to receive a release instruction for a virtual scout of the camp to which the first virtual object belongs;
the display module 1601 is configured to display a thumbnail map of the virtual scene in the virtual scene interface, where the thumbnail map includes location identifiers of respective virtual objects in the virtual scene.
In one possible implementation, as shown in fig. 17, the receiving module 1602 includes:
a detecting unit 1621, configured to detect a release operation of a virtual scout owned by the first virtual object.
In another possible implementation, as shown in fig. 17, the receiving module 1602 includes:
the receiving unit 1622 is configured to receive a release instruction sent by the server, where the release instruction is sent to the server after another device detects a release operation on a virtual scout owned by another virtual object, and the other virtual object and the first virtual object belong to the same camp.
In another possible implementation, as shown in fig. 17, the apparatus further includes:
an adding module 1603, configured to add, according to the position where each virtual object in the virtual scene is located, a position identifier of each virtual object in the thumbnail map.
In another possible implementation, as shown in fig. 17, add module 1603, including:
a determining unit 1631, configured to determine a target position of the second virtual object in the thumbnail map according to a position of any second virtual object in the virtual scene and a zoom ratio between the virtual scene and the thumbnail map;
an adding unit 1632, configured to add a location identifier of the second virtual object in the thumbnail map according to the target location.
In another possible implementation manner, the virtual scene includes a plurality of first reference positions, the thumbnail map includes a plurality of second reference positions, and the plurality of first reference positions correspond to the plurality of second reference positions one to one;
as shown in fig. 17, a determining unit 1631, configured to determine a first distance between a position where the second virtual object is located in the virtual scene and each first reference position; according to the scaling, scaling the plurality of first distances to obtain a plurality of second distances; and positioning the target position according to the plurality of second reference positions and the corresponding second distances so as to enable the target position to be separated from each second reference position by the corresponding second distance.
Fig. 18 shows a block diagram of an electronic device 1800 according to an exemplary embodiment of the present application. The electronic device 1800 may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The electronic device 1800 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
The electronic device 1800 includes: a processor 1801 and a memory 1802.
The processor 1801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1801 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 1801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1802 may include one or more computer-readable storage media, which may be non-transitory. Memory 1802 may also include high-speed random access memory, as well as non-volatile memory such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1802 is used to store at least one program code, which is executed by the processor 1801 to implement the virtual object control method provided by the method embodiments of the present application.
In some embodiments, the electronic device 1800 may also optionally include: a peripheral interface 1803 and at least one peripheral. The processor 1801, memory 1802, and peripheral interface 1803 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1804, display 1805, camera assembly 1806, audio circuitry 1807, positioning assembly 1808, and power supply 1809.
The peripheral interface 1803 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1801 and the memory 1802. In some embodiments, the processor 1801, memory 1802, and peripheral interface 1803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1801, the memory 1802, and the peripheral device interface 1803 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1804 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1804 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuitry 1804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1804 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1805 is a touch display screen, the display screen 1805 also has the ability to capture touch signals on or over the surface of the display screen 1805. The touch signal may be input to the processor 1801 as a control signal for processing. At this point, the display 1805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1805 may be one, disposed on a front panel of the electronic device 1800; in other embodiments, the number of the display screens 1805 may be at least two, and each of the display screens is disposed on a different surface of the electronic device 1800 or is in a foldable design; in other embodiments, the display 1805 may be a flexible display disposed on a curved surface or a folded surface of the electronic device 1800. Even more, the display 1805 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display 1805 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1806 is used to capture images or video. Optionally, the camera assembly 1806 includes a front camera and a rear camera. The front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1806 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1801 for processing or inputting the electric signals to the radio frequency circuit 1804 to achieve voice communication. The microphones may be multiple and disposed at different locations of the electronic device 1800 for stereo sound capture or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1801 or the radio frequency circuitry 1804 to sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1807 may also include a headphone jack.
The positioning component 1808 is used to locate the current geographic location of the electronic device 1800 to implement navigation or LBS (Location Based Service). The positioning component 1808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1809 is used to power various components within the electronic device 1800. The power supply 1809 may be ac, dc, disposable or rechargeable. When the power supply 1809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 1800 also includes one or more sensors 1810. The one or more sensors 1810 include, but are not limited to: acceleration sensor 1811, gyro sensor 1812, pressure sensor 1813, fingerprint sensor 1814, optical sensor 1815, and proximity sensor 1816.
The acceleration sensor 1811 may detect the magnitude of acceleration on three coordinate axes of a coordinate system established with the electronic device 1800. For example, the acceleration sensor 1811 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1801 may control the display 1805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1811. The acceleration sensor 1811 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1812 may detect a body direction and a rotation angle of the electronic device 1800, and the gyro sensor 1812 may cooperate with the acceleration sensor 1811 to collect a 3D motion of the user on the electronic device 1800. The processor 1801 may implement the following functions according to the data collected by the gyro sensor 1812: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 1813 may be disposed on a side bezel of the electronic device 1800 and/or on a lower layer of the display 1805. When the pressure sensor 1813 is disposed on a side frame of the electronic device 1800, a user's holding signal of the electronic device 1800 can be detected, and the processor 1801 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1813. When the pressure sensor 1813 is disposed at the lower layer of the display screen 1805, the processor 1801 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1805. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1814 is used to collect the fingerprint of the user, and the processor 1801 identifies the user according to the fingerprint collected by the fingerprint sensor 1814, or the fingerprint sensor 1814 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1801 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1814 may be disposed on the front, back, or side of the electronic device 1800. When a physical key or vendor Logo is provided on the electronic device 1800, the fingerprint sensor 1814 may be integrated with the physical key or vendor Logo.
The optical sensor 1815 is used to collect the ambient light intensity. In one embodiment, the processor 1801 may control the display brightness of the display screen 1805 based on the ambient light intensity collected by the optical sensor 1815. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1805 is increased; when the ambient light intensity is low, the display brightness of the display 1805 is reduced. In another embodiment, the processor 1801 may also dynamically adjust the shooting parameters of the camera assembly 1806 according to the intensity of the ambient light collected by the optical sensor 1815.
A proximity sensor 1816, also known as a distance sensor, is disposed on the front panel of the electronic device 1800. The proximity sensor 1816 is used to gather the distance between the user and the front of the electronic device 1800. In one embodiment, the processor 1801 controls the display 1805 to switch from the bright screen state to the dark screen state when the proximity sensor 1816 detects that the distance between the user and the front surface of the electronic device 1800 gradually decreases; when the proximity sensor 1816 detects that the distance between the user and the front surface of the electronic device 1800 is gradually increased, the processor 1801 controls the display 1805 to switch from the breath-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 18 is not intended to be limiting of the electronic device 1800 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Fig. 19 is a schematic structural diagram of a server 1900 according to an embodiment of the present application. The server 1900 may vary greatly in configuration or performance, and may include one or more processors (CPUs) 1901 and one or more memories 1902, where the memory 1902 stores at least one program code, which is loaded and executed by the processor 1901 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, and may include other components for implementing the functions of the device, which are not described here again.
Server 1900 may be configured to perform the steps performed by the server in the virtual object control method described above.
The embodiment of the present application further provides a computer device, where the computer device includes a processor and a memory, and the memory stores at least one program code, and the at least one program code is loaded and executed by the processor to implement the operations executed in the virtual object control method of the foregoing embodiment.
The embodiment of the present application further provides a computer-readable storage medium, in which at least one program code is stored, and the at least one program code is loaded and executed by a processor to implement the operations performed in the virtual object control method of the foregoing embodiment.
Embodiments of the present application also provide a computer program product or a computer program comprising computer program code stored in a computer readable storage medium. The processor of the computer apparatus reads the computer program code from the computer-readable storage medium, and the processor executes the computer program code, so that the computer apparatus implements the operations performed in the virtual object control method as in the above-described embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only an alternative embodiment of the present application and should not be construed as limiting the present application, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (25)

1. A virtual object control method, characterized in that the method comprises:
displaying a first virtual object and a second virtual object, wherein the first virtual object and the second virtual object belong to the same camp;
receiving an attack instruction of the first virtual object to the second virtual object;
if the first number of times that the first virtual object kills the own virtual object or the second number of times that the second virtual object is killed by the own virtual object is not less than the reference number of times, controlling the life value of the first virtual object to continuously reduce, reducing the moving speed of the first virtual object and not controlling the life value of the second virtual object to reduce;
and if the first times and the second times are both smaller than the reference times, controlling the life value of the second virtual object to continuously decrease, and reducing the moving speed of the second virtual object.
2. The method of claim 1, wherein the controlling the life value of the first virtual object to decrease if the first virtual object kills the own virtual object for a first number of times or the second virtual object is killed by the own virtual object for a second number of times that is not less than a reference number of times comprises:
and if the first time or the second time is not less than the reference time, controlling the life value of the first virtual object to be reduced by the reference value.
3. The method of claim 1, wherein the controlling the life value of the first virtual object to decrease if the first virtual object kills the own virtual object for a first number of times or the second virtual object is killed by the own virtual object for a second number of times that is not less than a reference number of times comprises:
and if the first time or the second time is not less than the reference time, controlling the life value of the first virtual object to reduce the reference scale.
4. The method of claim 1, wherein receiving an attack instruction from the first virtual object on the second virtual object comprises:
detecting a shooting operation of the first virtual object on the second virtual object by using a virtual firearm; or,
detecting a chopping operation of the first virtual object on the second virtual object by using a virtual tool.
5. The method of claim 1, wherein receiving the attack instruction of the first virtual object on the second virtual object comprises:
detecting a trigger operation on an attack option while the second virtual object is selected.
6. A virtual object control method, characterized in that the method comprises:
displaying a virtual scene interface, the virtual scene interface including a first virtual object and not including a thumbnail map;
receiving a release instruction for a virtual scout plane of the camp to which the first virtual object belongs;
displaying a thumbnail map of the virtual scene in the virtual scene interface, wherein the thumbnail map comprises position identifiers of all virtual objects in the virtual scene;
and receiving an interference instruction from an opposing camp, and canceling display of the thumbnail map in the virtual scene interface, wherein the interference instruction is used for interfering with the virtual scout plane of the camp to which the first virtual object belongs.
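The display flow in claim 6 amounts to a small visibility state machine. The sketch below is illustrative only; the class and method names are assumptions, not terms from the patent:

```python
class ScoutMinimapController:
    """Claim-6 flow: releasing a scout plane reveals the thumbnail map;
    an enemy interference instruction hides it again."""

    def __init__(self):
        self.thumbnail_visible = False  # the interface starts without the map
        self.position_markers = {}

    def on_scout_released(self, object_positions):
        # Release instruction for the own camp's scout plane: show the
        # thumbnail map with a position marker for every virtual object.
        self.position_markers = dict(object_positions)
        self.thumbnail_visible = True

    def on_enemy_interference(self):
        # Interference instruction from the opposing camp jams the scout
        # plane, so display of the thumbnail map is cancelled.
        self.thumbnail_visible = False
        self.position_markers.clear()
```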
7. The method of claim 6, wherein receiving the release instruction for the virtual scout plane of the camp to which the first virtual object belongs comprises:
detecting a release operation on a virtual scout plane owned by the first virtual object.
8. The method of claim 6, wherein receiving the release instruction for the virtual scout plane of the camp to which the first virtual object belongs comprises:
receiving the release instruction sent by a server, wherein the release instruction is sent to the server after another device detects a release operation on the virtual scout plane by another virtual object, the other virtual object and the first virtual object belonging to the same camp.
9. The method of claim 6, wherein before displaying the thumbnail map of the virtual scene in the virtual scene interface, the method further comprises:
adding a position identifier of each virtual object to the thumbnail map according to the position of the virtual object in the virtual scene.
10. The method according to claim 9, wherein adding the position identifier of each virtual object to the thumbnail map according to the position of the virtual object in the virtual scene comprises:
for any second virtual object in the virtual scene, determining a target position of the second virtual object in the thumbnail map according to the position of the second virtual object in the virtual scene and a scaling ratio between the virtual scene and the thumbnail map;
and adding the position identifier of the second virtual object to the thumbnail map according to the target position.
11. The method of claim 10, wherein the virtual scene comprises a plurality of first reference positions, the thumbnail map comprises a plurality of second reference positions, and the plurality of first reference positions are in one-to-one correspondence with the plurality of second reference positions;
determining the target position of the second virtual object in the thumbnail map according to the position of the second virtual object in the virtual scene and the scaling ratio between the virtual scene and the thumbnail map comprises:
determining a first distance between the position of the second virtual object in the virtual scene and each first reference position;
scaling the plurality of first distances according to the scaling ratio to obtain a plurality of second distances;
and locating the target position according to the plurality of second reference positions and the corresponding second distances, so that the target position is separated from each second reference position by the corresponding second distance.
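Claims 10 and 11 together describe placing an object's marker on the thumbnail map by scaling its distances to known scene reference positions and then locating the point on the map that sits at the scaled distance from each corresponding map reference position. A minimal 2-D sketch, assuming three non-collinear reference pairs and a uniform scaling ratio (all function and parameter names are illustrative):

```python
import math

def locate_on_thumbnail(scene_pos, first_refs, second_refs, scale):
    """Find the target position on the thumbnail map.

    scene_pos   -- (x, y) of the second virtual object in the virtual scene
    first_refs  -- three non-collinear first reference positions in the scene
    second_refs -- the corresponding second reference positions on the map
    scale       -- scaling ratio between the virtual scene and the map
    """
    # First distances: object to each first reference position in the scene.
    first_distances = [math.dist(scene_pos, r) for r in first_refs]
    # Second distances: the first distances scaled to map units.
    second_distances = [d * scale for d in first_distances]

    # Locate the target so it lies at the corresponding second distance from
    # each second reference position: subtracting the first circle equation
    # from the others yields a linear 2x2 system in (x, y).
    (x0, y0), r0 = second_refs[0], second_distances[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(second_refs[1:], second_distances[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        rhs.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)

    (a11, a12), (a21, a22) = rows
    det = a11 * a22 - a12 * a21  # non-zero when the references are non-collinear
    x = (rhs[0] * a22 - a12 * rhs[1]) / det
    y = (a11 * rhs[1] - a21 * rhs[0]) / det
    return (x, y)
```

With a uniform scale the result coincides with simply scaling the offset from any single reference, but the distance formulation mirrors the steps the claim recites.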
12. An apparatus for controlling a virtual object, the apparatus comprising:
a display module, configured to display a first virtual object and a second virtual object, the first virtual object and the second virtual object belonging to the same camp;
a receiving module, configured to receive an attack instruction of the first virtual object on the second virtual object;
a first control module, configured to control a life value of the first virtual object to decrease continuously, reduce a moving speed of the first virtual object, and not control a life value of the second virtual object to decrease, if a first number of times that the first virtual object has killed virtual objects of its own camp, or a second number of times that the second virtual object has been killed by virtual objects of its own camp, is not less than a reference number of times;
and a second control module, configured to control the life value of the second virtual object to decrease continuously and reduce the moving speed of the second virtual object if both the first number of times and the second number of times are less than the reference number of times.
13. The apparatus of claim 12, wherein the first control module comprises:
a control unit, configured to control the life value of the first virtual object to decrease by a reference value if the first number of times or the second number of times is not less than the reference number of times.
14. The apparatus of claim 12, wherein the first control module comprises:
a control unit, configured to control the life value of the first virtual object to decrease by a reference proportion if the first number of times or the second number of times is not less than the reference number of times.
15. The apparatus of claim 12, wherein the receiving module comprises:
a detection unit, configured to detect a shooting operation performed by the first virtual object on the second virtual object with a virtual firearm; or,
the detection unit being further configured to detect a slashing operation performed by the first virtual object on the second virtual object with a virtual knife.
16. The apparatus of claim 12, wherein the receiving module comprises:
a detection unit, configured to detect a trigger operation on an attack option while the second virtual object is selected.
17. An apparatus for controlling a virtual object, the apparatus comprising:
a display module, configured to display a virtual scene interface, the virtual scene interface including a first virtual object and not including a thumbnail map;
a receiving module, configured to receive a release instruction for a virtual scout plane of the camp to which the first virtual object belongs;
the display module being further configured to display a thumbnail map of the virtual scene in the virtual scene interface, wherein the thumbnail map comprises position identifiers of all virtual objects in the virtual scene;
and the receiving module being further configured to receive an interference instruction from an opposing camp and cancel display of the thumbnail map in the virtual scene interface, wherein the interference instruction is used for interfering with the virtual scout plane of the camp to which the first virtual object belongs.
18. The apparatus of claim 17, wherein the receiving module comprises:
a detection unit, configured to detect a release operation on a virtual scout plane owned by the first virtual object.
19. The apparatus of claim 18, wherein the receiving module comprises:
a receiving unit, configured to receive the release instruction sent by a server, wherein the release instruction is sent to the server after another device detects a release operation on the virtual scout plane by another virtual object, the other virtual object and the first virtual object belonging to the same camp.
20. The apparatus of claim 18, further comprising:
an adding module, configured to add a position identifier of each virtual object to the thumbnail map according to the position of the virtual object in the virtual scene.
21. The apparatus of claim 20, wherein the adding module comprises:
a determining unit, configured to determine, for any second virtual object in the virtual scene, a target position of the second virtual object in the thumbnail map according to the position of the second virtual object in the virtual scene and a scaling ratio between the virtual scene and the thumbnail map;
and an adding unit, configured to add the position identifier of the second virtual object to the thumbnail map according to the target position.
22. The apparatus according to claim 21, wherein the virtual scene comprises a plurality of first reference positions, the thumbnail map comprises a plurality of second reference positions, and the plurality of first reference positions are in one-to-one correspondence with the plurality of second reference positions;
the determining unit being configured to determine a first distance between the position of the second virtual object in the virtual scene and each first reference position;
scale the plurality of first distances according to the scaling ratio to obtain a plurality of second distances;
and locate the target position according to the plurality of second reference positions and the corresponding second distances, so that the target position is separated from each second reference position by the corresponding second distance.
23. A computer device, comprising a processor and a memory, the memory storing at least one program code, the at least one program code being loaded and executed by the processor to implement the operations performed in the virtual object control method according to any one of claims 1 to 5, or the operations performed in the virtual object control method according to any one of claims 6 to 11.
24. A computer-readable storage medium, storing at least one program code, the at least one program code being loaded and executed by a processor to implement the operations performed in the virtual object control method according to any one of claims 1 to 5, or the operations performed in the virtual object control method according to any one of claims 6 to 11.
25. A computer program product or computer program, characterized in that the computer program product or computer program comprises computer program code stored in a computer-readable storage medium; a processor of a computer device reads the computer program code from the computer-readable storage medium and executes it, so that the computer device implements the operations performed in the virtual object control method according to any one of claims 1 to 5 or claims 6 to 11.
CN202010953355.8A 2020-09-11 2020-09-11 Virtual object control method and device, computer equipment and storage medium Active CN112057861B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010953355.8A CN112057861B (en) 2020-09-11 2020-09-11 Virtual object control method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112057861A CN112057861A (en) 2020-12-11
CN112057861B true CN112057861B (en) 2022-04-26

Family

ID=73696193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010953355.8A Active CN112057861B (en) 2020-09-11 2020-09-11 Virtual object control method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112057861B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108310772A (en) * 2018-01-22 2018-07-24 腾讯科技(深圳)有限公司 The execution method and apparatus and storage medium of attack operation, electronic device
CN111035918A (en) * 2019-11-20 2020-04-21 腾讯科技(深圳)有限公司 Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN111111165A (en) * 2019-12-05 2020-05-08 腾讯科技(深圳)有限公司 Control method and device of virtual prop, storage medium and electronic device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Introduction to the teammate-damage punishment system in PUBG: Exhilarating Battlefield and how to use it, http://m.52miji.com/xzpubgm/gl/22019.html; Anonymous; Web page; 2018-11-06; pp. 1-2 *
UAV reconnaissance and see-through view: a detailed look at drones in Call of Duty: Mobile, https://www.bilibili.com/video/BV1ft411W7gq?from=search&seid=14670037377026672244&spm_id_from=333.337.0.0; 炮艇船长; Bilibili video; 2019-01-07; entire video *
How to kill teammates in PUBG: Exhilarating Battlefield, a guide to malicious teammate damage, http://www.87g.com/pg/82878.html; Li Gang; Web page; 2018-07-15; pp. 1-2 *

Also Published As

Publication number Publication date
CN112057861A (en) 2020-12-11

Similar Documents

Publication Publication Date Title
CN108619721B (en) Distance information display method and device in virtual scene and computer equipment
CN112494955B (en) Skill releasing method, device, terminal and storage medium for virtual object
CN111921197B (en) Method, device, terminal and storage medium for displaying game playback picture
CN111589127B (en) Control method, device and equipment of virtual role and storage medium
CN110694273A (en) Method, device, terminal and storage medium for controlling virtual object to use prop
CN113398571B (en) Virtual item switching method, device, terminal and storage medium
CN111589141B (en) Virtual environment picture display method, device, equipment and medium
CN111589146A (en) Prop operation method, device, equipment and storage medium based on virtual environment
CN109634413B (en) Method, device and storage medium for observing virtual environment
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN112717396B (en) Interaction method, device, terminal and storage medium based on virtual pet
CN113577765B (en) User interface display method, device, equipment and storage medium
CN112604305A (en) Virtual object control method, device, terminal and storage medium
CN113041620B (en) Method, device, equipment and storage medium for displaying position mark
CN111760278A (en) Skill control display method, device, equipment and medium
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN112691370A (en) Method, device, equipment and storage medium for displaying voting result in virtual game
CN111672104A (en) Virtual scene display method, device, terminal and storage medium
CN112221142A (en) Control method and device of virtual prop, computer equipment and storage medium
CN113398572A (en) Virtual item switching method, skill switching method and virtual object switching method
CN111760281A (en) Method and device for playing cut-scene animation, computer equipment and storage medium
CN112402962A (en) Signal display method, device, equipment and medium based on virtual environment
CN113198178B (en) Virtual object position prompting method, device, terminal and storage medium
CN112121438B (en) Operation prompting method, device, terminal and storage medium
CN112044070B (en) Virtual unit display method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant