CN111589144A - Control method, device, equipment and medium of virtual role - Google Patents

Control method, device, equipment and medium of virtual role

Info

Publication number
CN111589144A
CN111589144A
Authority
CN
China
Prior art keywords
stunning
virtual
virtual character
skill
avatar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010589764.4A
Other languages
Chinese (zh)
Other versions
CN111589144B (en)
Inventor
姚丽
刘智洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010589764.4A priority Critical patent/CN111589144B/en
Publication of CN111589144A publication Critical patent/CN111589144A/en
Application granted granted Critical
Publication of CN111589144B publication Critical patent/CN111589144B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55 - Controlling game characters or game objects based on the game progress
    • A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F 13/837 - Shooting of targets

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a control method, device, equipment, and medium for a virtual character, and relates to the field of virtual environments. The method includes: displaying a virtual environment screen, the virtual environment screen including a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing a stunning skill; controlling the first virtual character to use the stunning skill in response to a stun count of the first virtual character reaching a count threshold, the stun count being the number of second virtual characters knocked down by the first virtual character; and controlling the second virtual characters located within the action range of the stunning skill to be stunned, the action range including a region determined in the virtual environment according to the position of the first virtual character. The method simplifies the operations a user must perform to help the virtual character escape an encirclement and improves human-computer interaction efficiency.

Description

Control method, device, equipment and medium of virtual role
Technical Field
The embodiment of the application relates to the field of virtual environments, in particular to a method, a device, equipment and a medium for controlling a virtual role.
Background
In an application program based on a three-dimensional virtual environment, such as a first-person shooting game, a user can control virtual characters in the virtual environment to perform actions such as walking, running, climbing, shooting, fighting and the like.
In a zombie-mode first-person shooting game, zombies controlled by the computer attack a virtual character controlled by the client, and the virtual character needs to kill the zombies in the virtual environment to ensure its own survival and win. When a large number of zombies surround the virtual character, the user can control the virtual character to kill or repel the surrounding zombies by triggering the shooting control, so that the virtual character can escape the encirclement.
When there are too many zombies around the virtual character, the user needs to continuously aim and repeatedly tap the shooting control to kill or repel the zombies before the virtual character can escape. The user operation is too complicated, and the human-computer interaction efficiency is low.
Disclosure of Invention
The embodiment of the application provides a control method, device, equipment, and medium for a virtual character. When the virtual character is surrounded by zombies, the operations a user must perform to help the virtual character escape can be simplified, and the human-computer interaction efficiency can be improved. The technical scheme is as follows:
in one aspect, a method for controlling a virtual character is provided, where the method includes:
displaying a virtual environment screen, the virtual environment screen including: a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing a stunning skill;
controlling the first virtual character to use the stunning skill in response to a stun count of the first virtual character reaching a count threshold, the stun count being the number of second virtual characters knocked down by the first virtual character;
controlling the second virtual characters located within the action range of the stunning skill to be stunned, the action range including a region determined in the virtual environment according to the position of the first virtual character.
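Illustratively, the steps above can be sketched in simplified form. The following is a two-dimensional illustration only: the class and function names, the threshold value, and the radius are assumptions for the sketch, not values specified by the application.

```python
import math
from dataclasses import dataclass

@dataclass
class Character:
    x: float
    y: float
    stunned: bool = False

@dataclass
class FirstCharacter(Character):
    stun_count: int = 0          # second virtual characters knocked down so far

STUN_THRESHOLD = 50              # the "count threshold"; example value only
SKILL_RADIUS = 8.0               # action range around the first character; assumed

def try_use_stunning_skill(player: FirstCharacter, zombies: list) -> bool:
    """Passive trigger: once the stun count reaches the threshold, stun every
    second character inside the action range around the first character."""
    if player.stun_count < STUN_THRESHOLD:
        return False
    for z in zombies:
        if math.hypot(z.x - player.x, z.y - player.y) <= SKILL_RADIUS:
            z.stunned = True
    player.stun_count = 0        # consume the accumulated count
    return True
```

In this sketch the skill fires automatically from the game loop, with no extra user input, which matches the passive-skill behavior described later in the description.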
In another aspect, there is provided an apparatus for controlling a virtual character, the apparatus including:
a display module, configured to display a virtual environment screen, where the virtual environment screen includes: a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing a stunning skill;
a control module, configured to control the first virtual character to use the stunning skill in response to a stun count of the first virtual character reaching a count threshold, the stun count being the number of second virtual characters knocked down by the first virtual character;
the control module is further configured to control the second virtual characters located within the action range of the stunning skill to be stunned, where the action range includes a region determined in the virtual environment according to the position of the first virtual character.
In another aspect, there is provided a computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement the control method of a virtual character as described above.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the control method of a virtual character as described above.
In another aspect, embodiments of the present application provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the control method of the virtual character provided in the above-mentioned optional implementations.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
By setting a stunning skill for the first virtual character, after the first virtual character knocks down a certain number of zombies (second virtual characters), the first virtual character is automatically controlled to use the stunning skill to stun the zombies near it. The first virtual character can thus limit the activity of the surrounding zombies and quickly escape their encirclement while they are stunned and unable to move. The stunning skill is a passive skill used automatically when its use condition is met; it can be triggered without extra operation by the user, stunning the nearby zombies. This simplifies the user operation and improves the human-computer interaction efficiency of controlling the first virtual character to escape.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a terminal provided in an exemplary embodiment of the present application;
FIG. 2 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application;
FIG. 4 is a schematic view of a camera model corresponding to a perspective of a virtual object provided by an exemplary embodiment of the present application;
FIG. 5 is a schematic user interface diagram of a method for controlling a virtual character according to an exemplary embodiment of the present application;
FIG. 6 is a flowchart of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 7 is a schematic user interface diagram of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 8 is a schematic user interface diagram of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 9 is a schematic user interface diagram of a method for controlling a virtual character according to another exemplary embodiment of the present application;
fig. 10 is a schematic diagram of a collision detection model of a control method of a virtual character according to another exemplary embodiment of the present application;
FIG. 11 is a flowchart of a method for controlling a virtual character according to another exemplary embodiment of the present application;
fig. 12 is a schematic diagram illustrating the scope of the virtual character control method according to another exemplary embodiment of the present application;
FIG. 13 is a schematic user interface diagram of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 14 is a flowchart of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 15 is a flowchart of a method for controlling a virtual character according to another exemplary embodiment of the present application;
fig. 16 is a block diagram of a control apparatus of a virtual character according to another exemplary embodiment of the present application;
fig. 17 is a block diagram of a terminal provided in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
virtual environment: is a virtual environment that is displayed (or provided) when an application is run on the terminal. The virtual environment may be a simulated world of a real world, a semi-simulated semi-fictional world, or a purely fictional world. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
Virtual character: refers to a movable object in a virtual environment. The movable object may be a virtual person, a virtual animal, an animation character, etc., such as characters, animals, plants, oil drums, walls, and stones displayed in a three-dimensional virtual environment. Optionally, the virtual character is a three-dimensional volumetric model created based on skeletal animation techniques. Each virtual character has its own shape and volume in the three-dimensional virtual environment and occupies a portion of its space.
First-person shooter game (FPS): a shooting game played from a first-person perspective, in which the screen of the virtual environment is a screen observing the virtual environment from the perspective of the first virtual character. In the game, at least two virtual characters fight in a single-round battle mode in the virtual environment. A virtual character survives by avoiding attacks launched by other virtual characters and/or dangers existing in the virtual environment (such as a poison circle, marshland, or bombs). When the life value of a virtual character in the virtual environment reaches zero, its life in the virtual environment ends, and the virtual characters that ultimately survive are the winners. Optionally, each client may control one or more virtual characters in the virtual environment, with the time when the first client joins the battle as the start time and the time when the last client exits the battle as the end time. Optionally, the competitive mode of the battle may include a single-player battle mode, a two-player team battle mode, or a multi-player team battle mode; the battle mode is not limited in the embodiments of the present application.
User interface (UI) control: any visual control or element visible on the user interface of the application, such as a picture, input box, text box, button, or tab. Some UI controls respond to user operations; for example, a shoot control can be operated to control a virtual character to shoot in the virtual environment. The UI controls referred to in the embodiments of the present application include, but are not limited to, the shoot control.
The method provided by the application can be applied to an application program having a virtual environment and virtual characters. Illustratively, an application that supports a virtual environment is one in which a user can control the movement of a virtual character within the virtual environment. By way of example, the methods provided herein may be applied to any one of: a Virtual Reality (VR) application, an Augmented Reality (AR) application, a three-dimensional map program, a military simulation program, a virtual reality game, an augmented reality game, a First-Person Shooter game (FPS), a Third-Person Shooter game (TPS), a Multiplayer Online Battle Arena game (MOBA), and a strategy game (SLG).
Illustratively, a game in the virtual environment consists of one or more maps of the game world. The virtual environment in the game simulates scenes of the real world, and a user can control a virtual character in the game to walk, run, jump, shoot, fight, drive, attack other virtual characters with virtual weapons, and perform other actions in the virtual environment. The interactivity is strong, and multiple users can form a team online for a competitive game.
In some embodiments, the application may be a shooting game, racing game, role-playing game, adventure game, sandbox game, tactical competition game, military simulation program, or the like. The client can support at least one of the Windows, macOS, Android, iOS, and Linux operating systems, and clients on different operating systems can interconnect and intercommunicate. In some embodiments, the client is a program adapted to a mobile terminal having a touch screen.
In some embodiments, the client is an application developed based on a three-dimensional engine, such as the three-dimensional engine being a Unity engine.
The terminal in the present application may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and so on. A client supporting a virtual environment is installed and runs on the terminal, such as a client of an application supporting a three-dimensional virtual environment. The application program may be any one of a Battle Royale (BR) game, a virtual reality application, an augmented reality program, a three-dimensional map program, a military simulation program, a third-person shooter game, a first-person shooter game, and a multiplayer online battle arena game. Alternatively, the application may be a stand-alone application, such as a stand-alone 3D game program, or a network online application.
Fig. 1 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application. As shown in fig. 1, the terminal includes a processor 101, a touch screen 102, and a memory 103.
The processor 101 may be at least one of a single-core processor, a multi-core processor, an embedded chip, and a processor having instruction execution capabilities.
The touch screen 102 is an ordinary touch screen or a pressure-sensitive touch screen. An ordinary touch screen can detect a pressing or sliding operation applied to the touch screen 102; a pressure-sensitive touch screen can additionally measure the degree of pressure exerted on the touch screen 102.
The memory 103 stores executable programs for the processor 101. Illustratively, the memory 103 stores a virtual environment program a, an application program B, an application program C, a touch pressure sensing module 18, and a kernel layer 19 of an operating system. The virtual environment program a is an application developed based on the three-dimensional virtual environment module 17. Optionally, the virtual environment program a includes, but is not limited to, at least one of a game program, a virtual reality program, a three-dimensional map program, and a three-dimensional presentation program developed with the three-dimensional virtual environment module (also referred to as the virtual environment module) 17. For example, when the operating system of the terminal is the Android operating system, the virtual environment program a is developed in the Java programming language and the C# language; for another example, when the operating system of the terminal is the iOS operating system, the virtual environment program a is developed in the Objective-C programming language and the C# language.
The three-dimensional virtual environment module 17 is a module supporting multiple operating-system platforms. Schematically, the three-dimensional virtual environment module may be used for program development in multiple fields, such as game development, Virtual Reality (VR), and three-dimensional maps. The specific type of the three-dimensional virtual environment module 17 is not limited in the embodiments of the present application; in the following embodiments, the three-dimensional virtual environment module 17 is illustrated as a module developed using the Unity engine.
The touch (and pressure) sensing module 18 is a module for receiving touch events (and pressure touch events) reported by the touch screen driver 191. Optionally, the touch sensing module may lack the pressure sensing function and not receive pressure touch events. A touch event includes the type of the touch event and coordinate values; the type includes, but is not limited to, a touch start event, a touch move event, and a touch end event. A pressure touch event includes a pressure value and the coordinate values of the pressure touch event. The coordinate values indicate the touch position of the operation on the display screen. Optionally, an abscissa axis is established in the horizontal direction of the display screen and an ordinate axis in the vertical direction to obtain a two-dimensional coordinate system.
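Illustratively, the event structure described above can be sketched as follows. This is a simplified illustration: the type and field names are assumptions, and the pressure touch event simply extends the plain touch event with a pressure value.

```python
from dataclasses import dataclass
from enum import Enum, auto

class TouchType(Enum):
    TOUCH_START = auto()
    TOUCH_MOVE = auto()
    TOUCH_END = auto()

@dataclass
class TouchEvent:
    type: TouchType
    x: int              # abscissa on the screen's two-dimensional coordinate system
    y: int              # ordinate

@dataclass
class PressureTouchEvent(TouchEvent):
    pressure: float     # degree of pressure reported by a pressure-sensitive screen
```

A module without the pressure sensing function would receive only `TouchEvent` instances; a pressure-sensitive module would also receive `PressureTouchEvent` instances.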
Illustratively, the kernel layer 19 includes a touch screen driver 191 and other drivers 192. The touch screen driver 191 is a module for detecting pressure touch events; when the touch screen driver 191 detects a pressure touch event, it transmits the event to the touch pressure sensing module 18.
Other drivers 192 may be drivers associated with the processor 101, drivers associated with the memory 103, drivers associated with network components, drivers associated with sound components, and the like.
Those skilled in the art will appreciate that the foregoing is merely a general illustration of the structure of the terminal. A terminal may have more or fewer components in different embodiments. For example, the terminal may further include a gravitational acceleration sensor, a gyro sensor, a power supply, and the like.
Fig. 2 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 200 includes: terminal 210, server cluster 220.
A client 211 supporting a virtual environment is installed and runs on the terminal 210; the client 211 may be an application supporting a virtual environment. When the terminal runs the client 211, a user interface of the client 211 is displayed on the screen of the terminal 210. The client can be any one of an FPS game, a TPS game, a military simulation program, a MOBA game, a tactical competition game, and an SLG game. In this embodiment, the client is illustrated as an FPS game. The terminal 210 is used by the first user 212, who uses it to control a first virtual character located in the virtual environment to perform activities; the first virtual character may be referred to as the master virtual character of the first user 212. The activities of the first virtual character include, but are not limited to: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual character is a simulated person character or an animated person character.
The device types of the terminal 210 include: at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only one terminal is shown in fig. 2, but there are multiple other terminals 240 in different embodiments. In some embodiments, at least one other terminal 240 corresponds to a developer: a development and editing platform for the client of the virtual environment is installed on the other terminal 240, the developer can edit and update the client there and transmit the updated client installation package to the server cluster 220 through a wired or wireless network, and the terminal 210 can download the client installation package from the server cluster 220 to update the client.
The terminal 210 and the other terminals 240 are connected to the server cluster 220 through a wireless network or a wired network.
The server cluster 220 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Server cluster 220 is used to provide background services for clients that support a three-dimensional virtual environment. Optionally, the server cluster 220 undertakes primary computing work and the terminals undertake secondary computing work; or, the server cluster 220 undertakes the secondary computing work, and the terminal undertakes the primary computing work; or, the server cluster 220 and the terminal perform cooperative computing by using a distributed computing architecture.
Optionally, the terminal and the server are both computer devices.
In one illustrative example, the server cluster 220 includes servers 221 to 226, where the server 221 includes a processor 222, a user account database 223, a combat service module 224, and a user-oriented Input/Output Interface (I/O Interface) 225. The processor 222 is configured to load instructions stored in the server 221 and process data in the user account database 223 and the combat service module 224. The user account database 223 stores data of the user accounts used by the terminal 210 and the other terminals 240, such as the avatars of the user accounts, their nicknames, their fighting-capacity indexes, and the service areas where they are located. The combat service module 224 provides multiple combat rooms for users to fight in. The user-facing I/O interface 225 establishes communication with the terminal 210 through a wireless or wired network to exchange data.
With reference to the above description of the virtual environment and the description of the implementation environment, a control method of a virtual character provided in the embodiment of the present application is described, and an execution subject of the method is illustrated as a client running on a terminal shown in fig. 1. The terminal runs an application program, which is a program supporting a virtual environment.
An exemplary embodiment of a control method of a virtual character applied to a zombie mode of an FPS game is provided.
In zombie mode, the virtual character can obtain gold coins by killing zombies and use the gold coins to purchase skills, e.g., the "stunning" skill, at a skill vending machine (also known as a "water fountain"). The stunning skill is a passive skill that is used automatically when the virtual character meets the skill use condition.
Illustratively, the stunning skill is used on a random basis after the virtual character has killed a designated number of zombies. For example, when the virtual character kills 50 zombies, the client randomly determines whether to automatically control the virtual character to use the stunning skill, stunning the zombies located near the virtual character for a period of time, after which the zombies automatically return to normal. For example, the probability is 50%, i.e., the virtual character has a 50% chance of using the stunning skill upon killing 50 zombies. Being stunned means that a zombie cannot move. Illustratively, when a zombie is stunned, a stunning effect is displayed on its head to inform the user that the zombie has been stunned.
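A minimal sketch of this random trigger follows. The function name is an assumption; the kill requirement and probability use the example values from the passage (50 zombies, 50%).

```python
import random

KILLS_REQUIRED = 50        # designated number of zombies, from the example
TRIGGER_PROBABILITY = 0.5  # 50% chance, from the example

def stunning_skill_triggers(kill_count: int, rng: random.Random) -> bool:
    """Roll the random trigger once the kill count reaches the requirement."""
    if kill_count < KILLS_REQUIRED:
        return False
    return rng.random() < TRIGGER_PROBABILITY
```

On success, the client would stun nearby zombies for a fixed duration and then restore them, as described above.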
For example, the virtual character in this embodiment refers to a virtual character controlled by the client (user), and a zombie refers to a virtual character controlled by a server/computer/artificial intelligence/automatic algorithm. Illustratively, in zombie mode, zombies are set to automatically search for nearby virtual characters and attack them. The virtual characters need to kill the zombies to ensure their own survival.
Illustratively, the skill vending machine is arranged randomly at any position in the virtual environment, or fixed at a set position in the virtual environment. The virtual character can trigger a skill purchasing interface by approaching the skill vending machine to purchase skills. Illustratively, the location of the skill vending machine is marked on a minimap of the virtual environment, and the user can control the virtual character to approach it by viewing the minimap.
Illustratively, a collision detection model is arranged around the model of the skill vending machine and is used to detect a virtual character approaching the machine. When the three-dimensional virtual model of the virtual character collides with the collision detection model, the collision detection model generates collision information; the client determines from this information that the virtual character has collided with the collision detection model, and then displays the skill purchasing interface or pops up a skill purchasing button.
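Illustratively, such a collision detection model can be approximated by a simple spherical trigger volume. The class name, method name, and radius below are assumptions for illustration only.

```python
import math

class VendingMachineTrigger:
    """Spherical trigger volume placed around the skill vending machine model."""

    def __init__(self, x: float, y: float, z: float, radius: float = 3.0):
        self.center = (x, y, z)
        self.radius = radius
        self.ui_visible = False  # whether the skill purchasing UI is shown

    def update(self, px: float, py: float, pz: float) -> None:
        """Show the purchase UI while the character's position is inside the volume."""
        dx = px - self.center[0]
        dy = py - self.center[1]
        dz = pz - self.center[2]
        self.ui_visible = math.sqrt(dx * dx + dy * dy + dz * dz) <= self.radius
```

In an engine such as Unity, the same effect would typically be achieved with a trigger collider and an enter/exit callback rather than a per-frame distance check; the sketch only illustrates the detection principle.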
Fig. 3 is a flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment, and the method includes at least the following steps.
Step 301, displaying a virtual environment screen, where the virtual environment screen includes: a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing a stunning skill.
Illustratively, upon commencement of the battle, the client displays the user interface of the battle, which includes a virtual environment screen and UI controls positioned above the virtual environment screen. Illustratively, the user interface of the battle may further include: a team-forming interface for forming a team with friends, a matching interface for matching the virtual character with other virtual characters, a loading interface for loading game information of this battle, and the like.
Illustratively, the first virtual character in this embodiment is a virtual character controlled by the client, that is, the master virtual character controlled by the client. Illustratively, the second virtual character in this embodiment is a virtual character controlled by a server/artificial intelligence/algorithm, and is set to automatically attack the virtual character controlled by the client (the first virtual character). Illustratively, the second virtual character is at least one of a zombie, a ghoul, a monster, an animal, and a BOSS disposed in the virtual environment.
Illustratively, the virtual environment screen is a picture obtained by observing the virtual environment from the perspective of the first virtual character.
The perspective refers to an observation angle when the virtual character is observed in the virtual environment from a first person perspective or a third person perspective. Optionally, in an embodiment of the present application, the viewing angle is an angle when the virtual character is observed by the camera model in the virtual environment.
Optionally, the camera model automatically follows the virtual character in the virtual environment; that is, when the position of the virtual character in the virtual environment changes, the position of the camera model changes accordingly, and the camera model always remains within a preset distance of the virtual character in the virtual environment. Optionally, the relative positions of the camera model and the virtual character do not change during the automatic following.
The camera model is a three-dimensional model located around the virtual character in the virtual environment. When a first-person perspective is adopted, the camera model is located near or at the head of the virtual character. When a third-person perspective is adopted, the camera model may be located behind the virtual character and bound to it, or located at any position a preset distance away from the virtual character; the virtual character in the virtual environment can then be observed from different angles through the camera model. Optionally, when the third-person perspective is a first-person over-the-shoulder perspective, the camera model is located behind the virtual character (for example, at the head and shoulders of the virtual character). Optionally, in addition to the first-person and third-person perspectives, other perspectives may be used, such as a top-down perspective; when a top-down perspective is used, the camera model may be positioned above the head of the virtual character, observing the virtual environment from the air. Optionally, the camera model is not actually displayed in the virtual environment, i.e., the camera model does not appear in the virtual environment screen displayed by the user interface.
Taking as an example the case in which the camera model is located at any position a preset distance away from the virtual character: optionally, one virtual character corresponds to one camera model, and the camera model may rotate about the virtual character as a rotation center. For example, the camera model rotates about any point of the virtual character; during the rotation, the camera model not only turns but also translates, while the distance between the camera model and the rotation center remains constant. That is, the camera model rotates on the surface of a sphere whose center is the rotation center, where any point of the virtual character may be the head, the torso, or any point around the virtual character, which is not limited in this embodiment of the present application. Optionally, when the camera model observes the virtual character, the center of the camera model's view points from the point on the spherical surface where the camera model is located toward the center of the sphere.
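The sphere-constrained rotation described above can be sketched as follows. This is an illustrative computation only, not code from the patent: the function names, the yaw/pitch parameterization, and the coordinate convention are all assumptions.

```python
import math

def camera_position(center, radius, yaw_deg, pitch_deg):
    """Return a camera position on a sphere around the rotation center.

    The distance to the rotation center stays constant while the camera
    rotates; the view direction is then taken from the camera toward
    the center, as the text describes.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    cx, cy, cz = center
    x = cx + radius * math.cos(pitch) * math.cos(yaw)
    y = cy + radius * math.sin(pitch)   # height relative to the center
    z = cz + radius * math.cos(pitch) * math.sin(yaw)
    return (x, y, z)

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.dist(a, b)
```

However the yaw and pitch change, the distance to the rotation center remains exactly the configured radius, which is the invariant the text states.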
Optionally, the camera model may also observe the virtual character at a preset angle in different directions of the virtual character.
Referring to fig. 4, schematically, a point on the virtual character 11 is determined as a rotation center 12, and the camera model rotates around the rotation center 12. Optionally, the camera model is configured with an initial position, which is a position above and behind the virtual character (for example, behind the head). Illustratively, as shown in fig. 4, the initial position is position 13, and when the camera model rotates to position 14 or position 15, the direction of the camera model's view changes with the rotation.
Optionally, the virtual environment displayed by the virtual environment screen includes at least one of: a ladder, a vertical ladder, a rock-climbing area, a mountain, flat ground, a river, a lake, a sea, a desert, a marsh, quicksand, sky, plants, buildings, and vehicles.
For example, in an alternative scenario, a first avatar is surrounded by a plurality of second avatars, which attack the first avatar simultaneously.
Illustratively, as shown in fig. 5, in a virtual environment screen 601 obtained from a first-person perspective of a first avatar, a hand 602 of the first avatar and two second avatars 603 are included.
The stunning skill is a skill possessed by the first virtual character. Illustratively, the stunning skill is a passive skill, i.e., a skill that is automatically used when a skill use condition is met. Illustratively, the stunning skill is a skill that is equipped before the first virtual character enters the game, or a skill that the first virtual character acquires during the game. Illustratively, the stunning skill is a skill that the first virtual character purchases with gold coins after entering the game.
Illustratively, the stunning skill is a skill that acts on the second virtual character and may produce a stunning effect on it. The stunning effect includes at least one of: stunning the second virtual character, or lowering one or more state values of the second virtual character (life value, movement speed, attack power, defense power, recovery speed, etc.). Stunning refers to restricting the movement and attacks of the second virtual character. Illustratively, stunning is a state in which the second virtual character is stopped in place and cannot move or attack.
Step 302, in response to the defeat count of the first virtual character meeting a number threshold, controlling the first virtual character to use the stunning skill, where the defeat count is the number of second virtual characters defeated by the first virtual character.
Illustratively, the second virtual character has a life value, and the first virtual character can attack the second virtual character to lower its life value; the second virtual character dies when its life value falls below a threshold (e.g., 0). A defeat means that an attack by the first virtual character causes the life value of the second virtual character to fall below the threshold, in which case the second virtual character is defeated by the first virtual character. For example, if the life value of the second virtual character is 100, an attack by a third virtual character reduces it from 100 to 1, and a subsequent attack by the first virtual character brings it below the threshold, then the defeat is credited to the first virtual character, since it was the first virtual character's attack that caused the life value to fall below the threshold.
Illustratively, the defeat count of the first virtual character increases by one each time the first virtual character defeats a second virtual character.
The number threshold is an arbitrarily set value; for example, the number threshold may be 10, 30, 50, or the like. When the first virtual character defeats that number of second virtual characters, the first virtual character automatically uses the stunning skill.
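The defeat-attribution and threshold rules above can be sketched as a minimal simulation. All identifiers (`SecondCharacter`, `attack`, the concrete threshold values) are illustrative assumptions, not names from the patent.

```python
# Last-hit attribution: whichever attacker's hit drops the life value
# below the death threshold is credited with the defeat; the defeat
# count then drives the automatic use of the stunning skill.
DEATH_THRESHOLD = 0
NUMBER_THRESHOLD = 30   # example value; the text allows 10, 30, 50, ...

class SecondCharacter:
    def __init__(self, hp=100):
        self.hp = hp
        self.alive = True

def attack(defeat_counts, attacker_id, target, damage):
    """Apply damage; credit the defeat to the attacker landing the final hit."""
    if not target.alive:
        return False
    target.hp -= damage
    if target.hp <= DEATH_THRESHOLD:
        target.alive = False
        defeat_counts[attacker_id] = defeat_counts.get(attacker_id, 0) + 1
        return True
    return False

def should_use_stun(defeat_counts, attacker_id):
    """True once the attacker's defeat count meets the number threshold."""
    return defeat_counts.get(attacker_id, 0) >= NUMBER_THRESHOLD
```

In the 100-to-1-to-0 example from the text, the third character's hit for 99 earns no credit, while the first character's hit for 1 is the final blow and increments the first character's defeat count.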
Step 303, controlling a second virtual character located within the action range of the stunning skill to be subjected to the stunning effect, where the action range is an area determined in the virtual environment according to the position of the first virtual character.
After the first virtual character uses the stunning skill, the client acquires the action range of the stunning skill and controls the second virtual character located in the action range to be stunned.
Illustratively, the action range may be a three-dimensional spatial range or a two-dimensional planar range. Illustratively, the action range is a spherical or circular range centered on the position of the first virtual character with some distance as the radius. Alternatively, the action range may be an annular range centered on the position of the first virtual character, with one distance as the inner radius and another as the outer radius. Alternatively, the action range may be a sector with the position of the first virtual character as its vertex, extending in the direction the first virtual character faces.
In an alternative embodiment, after the first virtual character uses the stunning skill, the client acquires the position of each second virtual character in the virtual environment, calculates the distance between each second virtual character and the first virtual character, and determines the second virtual character with the distance smaller than the threshold value as the second virtual character located in the action range.
Illustratively, the stunning effect corresponds to a damage value and a stunning duration. Illustratively, the client lowers the life value of the second virtual character by the damage value, controls the second virtual character to be stunned, and maintains the stunned state until the stunning duration ends. Illustratively, the stunning effect may further include at least one of: reducing the defense value of the second virtual character, reducing its movement speed, and clearing its anger value.
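A minimal sketch of step 303 under the distance-based reading above: gather every second character whose distance to the first character is below a radius, then apply the stunning effect (damage plus a stun timer). The radius, the damage and duration figures, and all field names are illustrative assumptions.

```python
import math

STUN_RADIUS = 8.0      # assumed spherical action range
STUN_DAMAGE = 100      # example damage value
STUN_DURATION = 10.0   # example stunning duration in seconds

def in_range(first_pos, second_pos, radius=STUN_RADIUS):
    """Distance check determining membership in the action range."""
    return math.dist(first_pos, second_pos) < radius

def apply_stun(second_characters, first_pos):
    """Damage and stun every second character inside the action range.

    Each character is a dict with a "pos" and "hp"; characters that are
    hit get a "stunned_until" timestamp. Returns the characters hit.
    """
    hit = []
    for ch in second_characters:
        if in_range(first_pos, ch["pos"]):
            ch["hp"] -= STUN_DAMAGE
            ch["stunned_until"] = ch.get("time", 0.0) + STUN_DURATION
            hit.append(ch)
    return hit
```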
In summary, in the method provided in this embodiment, a stunning skill is set for the first virtual character, and after the first virtual character defeats a certain number of zombies (second virtual characters), the first virtual character is automatically controlled to use the stunning skill to stun the zombies near it. The first virtual character can thus restrict the activity of the surrounding zombies and quickly break out of their encirclement while they are stunned and unable to move. Since the stunning skill is a skill that is automatically used when the skill use condition is met (a passive skill), the user can trigger it and stun nearby zombies without any extra operation, which simplifies user operation and improves the human-computer interaction efficiency of the user controlling the first virtual character to escape.
Illustratively, in one alternative embodiment, the stunning skill is a skill that the first avatar purchases after entering the game. Illustratively, the stunning skill is a randomly triggered skill.
Fig. 6 is a flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application. Taking the execution subject of the method as an example of the client running on the terminal shown in fig. 1, the client is a client supporting a virtual environment, and according to the exemplary embodiment shown in fig. 3, step 301 further includes step 401 to step 402, step 302 further includes step 3021 to step 3023, and step 303 further includes step 501.
Step 401, in response to the distance between the first virtual character and the target object being less than the distance threshold, displaying a skill purchasing interface, the skill purchasing interface being used for purchasing stunning skills, the target object being a virtual object controlled by the server.
Illustratively, defeating a second virtual character earns the first virtual character a corresponding number of virtual items. Illustratively, the virtual items are used within the current game to purchase skills, weapons, equipment, or vehicles, to open maps, to unlock new areas, and so on. Illustratively, the virtual items may be gold coins, coupons, game coins, shells, diamonds, and the like. Illustratively, the first virtual character obtains 10 gold coins for each second virtual character it defeats. For example, as shown in fig. 5, an information field 604 of the first virtual character is displayed in the upper left corner of the virtual environment screen 601; the number of gold coins of the first virtual character is recorded in the information field and increases by 10 each time the first virtual character kills a second virtual character.
For example, the first avatar may also obtain the virtual items of other avatars by killing other client-controlled avatars, or the first avatar may also obtain the virtual items by selling skills, weapons, equipment, vehicles, or the first avatar may also obtain the virtual items by completing specified tasks.
For example, the first virtual character may purchase the stunning skill using virtual items. Illustratively, a target object is provided in the virtual environment and is used to purchase skills. For example, the target object may be at least one of a vending machine, a shop, and an NPC (Non-Player Character) provided in the virtual environment. Illustratively, the target object is an object that is stationary in the virtual environment, or an object that can only move within a small range. Illustratively, the target object has a three-dimensional virtual model in the virtual environment; for example, as shown in fig. 7, there is a target object 605 in the virtual environment. When the first virtual character approaches the target object, the client displays a skill purchasing interface. Illustratively, the skill purchasing interface includes a purchase control, and when the user triggers the purchase control, the client controls the first virtual character to purchase the corresponding item. For example, as shown in fig. 8, when the first virtual character approaches the target object 605, a purchase interface is displayed, which includes a purchase control 606; the user triggers the purchase control 606 to purchase the skill "Electric Cherry", after which, as shown in fig. 9, an icon corresponding to "Electric Cherry" is displayed on the skill bar 607 to inform the user that the first virtual character currently possesses that skill.
For example, the client may mark the location of the target object on a minimap of the virtual environment so that the user may find the target object from the minimap and purchase the skills. For example, as shown in fig. 9, a small map of the virtual environment is displayed in the upper right corner of the user interface, the position of the target object is marked with a black triangle 612 in the small map, and the user can find the target object according to the position of the black triangle 612.
For example, the client may detect in real time whether the distance between the first avatar and the target object is less than a distance threshold to determine whether the first avatar is close to the target object.
For example, the client may further set a collision detection model on the target object, and use the collision detection model to detect whether the first avatar is close to the target object.
Illustratively, the target object is provided with a collision detection model, and the collision detection model is used for detecting the distance between the first virtual character and the target object. A skill purchasing interface is displayed in response to the collision of the three-dimensional virtual model of the first virtual character with the collision detection model.
The collision detection model is a three-dimensional box disposed on a three-dimensional virtual model of the target object. Illustratively, the collision detection model is an invisible box (collision box), i.e. the collision detection model is not visible to the user in the virtual environment view. Illustratively, the size and shape of the collision detection model is set according to the size and shape of the three-dimensional virtual model of the target object. For example, the size and shape of the collision detection model is the same as the size and shape of the three-dimensional virtual model of the target object. Or the size of the collision detection model is slightly smaller than the size of the three-dimensional virtual model of the target object. Or the size of the collision detection model is slightly larger than that of the three-dimensional virtual model of the target object, so that the collision detection model wraps the target object.
Illustratively, to simplify the calculation process, the collision detection model is generally set to a regular shape, for example, a cube, a rectangular parallelepiped, a sphere, a pyramid, a cylinder, or the like. Illustratively, the collision detection model is provided as a three-dimensional virtual model that is slightly larger than the target object, such that the collision detection model wraps around the target object.
For example, the collision detection model may detect a collision in the virtual environment, and when there is a collision between another virtual model and a surface of the collision detection model, the collision detection model may generate collision information, where the collision information includes: at least one of information of the virtual model, a collision point, and a collision time. The information of the virtual model includes: the type of the virtual model, the size of the virtual model, the material of the virtual model, the account number of the virtual character and the state of the virtual character. Illustratively, the virtual character has a three-dimensional virtual model in the virtual environment, and when the virtual model of the virtual character collides with the collision detection model, the collision detection model acquires collision information and determines that the virtual character is close to the target object according to the collision information.
For example, as shown in fig. 10, a collision detection model 608 slightly larger than the target object 605 is provided on the target object 605.
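The invisible collision box described above can be sketched as an axis-aligned box overlap test. Real engines expose this through their physics systems as collision callbacks, so this standalone check is only an illustration, with all names assumed; the margin models the collider being slightly larger than the target object's model.

```python
def boxes_overlap(a_min, a_max, b_min, b_max):
    """Axis-aligned 3D box overlap: intervals must overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def inflate(box_min, box_max, margin):
    """Grow a box by `margin` on every side, so the collider wraps the model."""
    return (tuple(v - margin for v in box_min),
            tuple(v + margin for v in box_max))
```

With the collider inflated around the target object, the client would treat any overlap between the character's bounding box and the collider as the "collision" that triggers the skill purchasing interface.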
Step 402, in response to a purchase operation, reducing the number of virtual items owned by the first virtual character and controlling the first virtual character to obtain the stunning skill, where the virtual items are obtained by the first virtual character by defeating second virtual characters.
Illustratively, the purchase operation may be an operation in which the user triggers a UI control on the purchase interface. Alternatively, the purchase operation may be an operation in which the user inputs an instruction using an input device such as a mouse, a keyboard, a microphone, or a camera.
The client determines the skill which the user wants to purchase according to the purchasing operation of the user, acquires the number of virtual articles required by the skill, correspondingly reduces the number of the virtual articles of the first virtual role, and controls the first virtual role to acquire the skill. Illustratively, if the number of virtual items of the first virtual character is not sufficient to pay for the skill, the user is prompted that the number of virtual items is not sufficient.
For example, as shown in fig. 8, if purchasing the skill "Electric Cherry" costs 2000 gold coins and the first virtual character currently has 9410 gold coins, the client reduces the first virtual character's gold coins by 2000, and the first virtual character obtains the skill "Electric Cherry".
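The deduct-and-grant flow of step 402 can be sketched with the figures from the example above (a 2000-coin price against a 9410-coin balance). The function and field names are assumptions.

```python
SKILL_PRICE = 2000   # example price from the text

def purchase_skill(character, skill_name, price=SKILL_PRICE):
    """Deduct the price and grant the skill; refuse if coins are short.

    Returning False models the "number of virtual items is not
    sufficient" prompt described in the text.
    """
    if character["coins"] < price:
        return False
    character["coins"] -= price
    character.setdefault("skills", []).append(skill_name)
    return True
```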
Step 3021, in response to the defeat count of the first virtual character meeting the number threshold, determining whether to trigger the stunning skill according to a first probability.
Illustratively, the stunning skill is a skill that is triggered randomly after the skill use condition is met; i.e., when the defeat count of the first virtual character meets the number threshold, the first virtual character is not necessarily controlled to use the stunning skill, but rather is triggered to use it with a certain probability.
For example, the first probability may be 80%, i.e., when the number of knockouts for the first avatar satisfies a number threshold, there is an 80% likelihood that the first avatar will be triggered to use stunning skills.
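The probabilistic trigger can be sketched with an injectable random source so the roll is deterministic under test. The 80% figure comes from the example above; the function signature is an assumption.

```python
import random

def maybe_trigger_stun(defeat_count, threshold=30, p=0.8, rng=random):
    """Roll for the stunning skill once the defeat count meets the threshold.

    `rng` is any object with a random() method returning a float in
    [0, 1); injecting it keeps the roll testable.
    """
    if defeat_count < threshold:
        return False
    return rng.random() < p
```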
For example, when there are teammates around the first avatar, the effect of stunning skills may also be changed according to the number of teammates. As shown in fig. 11, step 3021 further includes steps 3021-1 to 3021-3, and step 303 further includes step 3031 and step 3032.
Step 3021-1, in response to the defeat count of the first virtual character meeting the number threshold, obtaining the number of target virtual characters located within a first range, where the first range is an area determined in the virtual environment according to the position of the first virtual character.
For example, when the defeat count of the first virtual character meets the number threshold, the client may obtain the number of target virtual characters within the first range. The target virtual character may be a virtual character on the same team as the first virtual character, or may be a second virtual character. That is, the client may determine the probability of triggering the stunning skill according to the number of teammates around the first virtual character, or according to the number of zombies around the first virtual character. Illustratively, the target virtual character may also be a teammate who is being attacked by a second virtual character, or a virtual character controlled by a client.
The first range may be a spherical range or a circular range centered at a position where the first virtual character is located and having a certain distance as a radius. Illustratively, the first range may or may not be the same as the range of action of the stunning technique.
Step 3021-2, in response to the number of target virtual characters not meeting the threshold, determining whether to trigger a stunning skill according to the first probability.
Illustratively, if the number of teammates around the first avatar is small, determining whether to trigger stunning skills according to the first probability. The threshold may be any value, for example, the threshold may be 0, i.e., when there are no teammates around the first avatar, it is determined with a first probability whether a stunning skill is triggered.
Step 3021-3, in response to the number of target virtual characters meeting the threshold, determining whether to trigger the stunning skill according to a second probability, the second probability not being equal to the first probability.
For example, if the number of teammates around the first virtual character is large, whether to trigger stunning skills is determined according to the second probability. For example, the threshold may be 0, i.e., when the first avatar has teammates around it, it is determined with a second probability whether to trigger stunning skills. Illustratively, the first probability may be greater than the second probability, or the first probability may be less than the second probability.
For example, when a first avatar has teammates, there is a greater probability that the first avatar will trigger stunning skills.
Step 3022, in response to triggering the stunning skill, controlling the first avatar to use the stunning skill.
The client determines whether the stunning skill is triggered according to the first probability, and if the stunning skill is triggered, controls the first virtual character to use it.
Step 3023, in response to the stunning skill not being triggered, resetting the defeat count of the first virtual character to zero.
The client determines whether to trigger the stunning skill according to the first probability. If the skill is not triggered, the first virtual character is not controlled to use the stunning skill; instead, the defeat count of the first virtual character is reset and counted anew, and when the defeat count meets the number threshold again, whether to trigger the stunning skill is determined again.
Step 3031, in response to the number of target virtual characters not meeting the threshold, controlling the second virtual characters within the action range of the stunning skill to be subjected to a first stunning effect.
Step 3032, in response to the number of the target virtual characters meeting the threshold, controlling the second virtual characters within the action range of the stunning skill to be subjected to a second stunning effect.
Wherein the first stunning effect and the second stunning effect have different maximum action durations or damage values.
For example, taking the action range as a circular area centered on the first virtual character 609 with a certain distance as the radius: as shown in fig. 12, a circle 610 of radius R centered on the first virtual character 609 is the action range. The three second virtual characters 603 located inside the circle 610 are subjected to the first stunning effect of the stunning skill, while the two second virtual characters 603 located outside the circle 610 are not.
Illustratively, the effect of the stunning skill has a limited duration of action. For example, the stunning effect corresponds to a maximum action duration. After the second virtual character is subjected to the stunning effect, the client times how long the second virtual character has been under the stunning effect, and stops the stunning effect in response to the elapsed duration reaching the maximum action duration. That is, the stunning skill stuns the second virtual character only for a period of time, after which the second virtual character returns to normal.
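The maximum action duration can be sketched with a simulated clock instead of wall time, so the behavior is deterministic; the 10-second figure and all identifiers are illustrative assumptions.

```python
MAX_STUN_DURATION = 10.0   # example maximum action duration in seconds

def update_stun(character, now):
    """Clear the stun once its elapsed duration reaches the maximum.

    `character` holds a "stun_start" timestamp (None when not stunned);
    returns whether the character is still stunned at time `now`.
    """
    start = character.get("stun_start")
    if start is not None and now - start >= MAX_STUN_DURATION:
        character["stun_start"] = None   # the character returns to normal
    return character["stun_start"] is not None
```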
Illustratively, when the first virtual character has no teammates around it, the stunning skill it uses produces the first stunning effect on nearby second virtual characters. When there are teammates around the first virtual character and the stunning skill is triggered, the stunning effect is increased from the first stunning effect to the second stunning effect. Illustratively, an increased stunning effect means an increased injury value or an increased maximum action duration of the stun.
For example, if there is no teammate around the first avatar, the stunning skill triggered at this time may cause 100-point injury to the second avatar and stun the second avatar 10 s. If there is a teammate around the first avatar, the stunning skill triggered at this time would cause 300-point injury to the second avatar and stun the second avatar 15 s.
Illustratively, in response to a second virtual character within the stunning skill's range being hit by at least two stunning skills, the second virtual character is controlled to be subjected to a third stunning effect. When multiple virtual characters stun the same second virtual character using stunning skills, the stunning effect on that second virtual character is increased. For example, if both the first virtual character and a third virtual character use stunning skills on the second virtual character, the second virtual character is subjected to a multiplied first stunning effect, where multiplying the stunning effect means multiplying both the injury value and the maximum action duration of the stun. Illustratively, the third stunning effect is a doubled stunning effect.
For example, one virtual character using the stunning skill may inflict 100 points of injury on a second virtual character and stun it for 10 s, whereas when two virtual characters use stunning skills on the same second virtual character, it is inflicted 400 points of injury and stunned for 40 s.
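One hedged reading of the stacking example above: each stunning skill contributes a base effect of 100 damage and a 10-second stun, and when at least two skills hit the same second virtual character the combined effect is doubled, which reproduces the 400-damage/40-second figure. This interpretation and all names are assumptions.

```python
BASE_DAMAGE = 100      # per-skill injury value from the example
BASE_DURATION = 10.0   # per-skill stun duration in seconds

def stacked_stun_effect(num_skills):
    """Combine `num_skills` stunning skills; double the total when stacked."""
    damage = BASE_DAMAGE * num_skills
    duration = BASE_DURATION * num_skills
    if num_skills >= 2:   # the doubled "third stunning effect"
        damage *= 2
        duration *= 2
    return damage, duration
```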
Step 501, displaying a stunning special effect corresponding to the second virtual character, where the special effect includes at least one of: displaying a stunning mark or a text prompt above the head of the second virtual character, displaying the second virtual character in a distinguishing manner, and controlling the second virtual character to fall to the ground.
Illustratively, a stunned second virtual character may display a stunning special effect. The special effect is used to inform the user that the second virtual character has been stunned, so that the user may either ignore the stunned second virtual character and escape, or defeat it while it cannot fight back.
For example, the stunning effect may be a stunning mark or a text prompt displayed on the head of the second virtual character, or a special effect light column displayed behind the second virtual character, or a distinctive display (e.g., highlighting, changing the color of the three-dimensional virtual model, strengthening a stroking edge, etc.) of the second virtual character, or a display showing the second virtual character falling on the ground, etc.
For example, as shown in fig. 13, a stunning special effect 611 may be displayed above the heads of the stunned second virtual characters 603, prompting the user that both virtual characters have been stunned.
In summary, in the method provided in this embodiment, when the first virtual character kills a certain number of zombies, it is automatically controlled to stun them using the stunning skill. When the first virtual character is surrounded by zombies, the user only needs to kill a certain number of them to trigger the stunning skill, stunning the zombies so that the first virtual character can escape quickly; this simplifies user operation and improves the human-computer interaction efficiency of the user controlling the first virtual character to escape.
In the method provided in this embodiment, each time the first virtual character kills a zombie, the client determines whether the number of zombies killed has reached the threshold and whether the stunning skill is triggered, and then automatically controls the first virtual character to use the stunning skill, which simplifies user operation and improves the human-computer interaction efficiency of the user controlling the first virtual character to escape.
In the method provided in this embodiment, the stunning skill is a skill that is used randomly after the skill use condition is met, and is not necessarily triggered whenever the first virtual character meets the condition. This reduces the frequency with which the first virtual character uses the stunning skill and makes its use unpredictable, increasing the variability and unpredictability of the game, raising its intensity, shortening game time, and reducing the load on the server.
In the method provided in this embodiment, the first virtual character obtains gold coins by killing zombies and purchases the stunning skill with gold coins, which limits how the stunning skill is obtained. By providing a skill vending machine at which the first virtual character purchases the stunning skill with gold coins, a skill purchasing interface is displayed when the first virtual character approaches the machine, and the user purchases the desired skill on that interface; this simplifies the skill-purchasing operation and improves human-computer interaction efficiency.
According to the method provided by the embodiment, whether the first virtual character is close to the skill vending machine or not is detected by setting the collision detection model on the skill vending machine, so that a skill purchasing interface is automatically displayed, the operation of purchasing skills of a user is simplified, and the human-computer interaction efficiency is improved.
According to the method provided by the embodiment, the stunning special effect is displayed on the stunned zombie, so that a user can conveniently and quickly identify the stunned zombie.
According to the method provided by the embodiment, after the zombies are controlled to be stunned for a period of time, the zombies are automatically controlled to be recovered to be normal, and normal operation of the game is not influenced while the separation time is provided for the first virtual role.
In the method provided by this embodiment, when teammates are near the first virtual character, both the probability of triggering the stunning skill and the effect of the stunning skill are increased, making it easier for multiple virtual characters to escape from the zombies' encirclement together. This simplifies the operation of controlling the virtual characters to escape and improves human-computer interaction efficiency.
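The teammate-dependent trigger probability can be sketched as follows. The function name, threshold, and probability values are hypothetical illustrations; the embodiment only requires that the second probability differ from the first:

```python
def select_trigger_probability(teammates_nearby, teammate_threshold=1,
                               first_probability=0.3, second_probability=0.5):
    """Return the probability used to decide whether the stunning skill fires.

    When enough teammates are within range, the higher second probability
    applies, making it easier for the group to break out together."""
    if teammates_nearby >= teammate_threshold:
        return second_probability
    return first_probability
```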
The following describes an exemplary application of the virtual character control method provided by the present application in a first-person shooter game.
Fig. 14 is a flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application. The method is described as being executed by a client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment. The method comprises the following steps.
Step 701, enter the zombie mode.
Illustratively, the first-person shooter game provides the user with multiple game modes. In the zombie mode, zombies automatically attack virtual characters, and the user must control a virtual character to kill zombies so that the virtual character survives and wins.
Step 702, determine whether the virtual character has killed a zombie. If so, go to step 703; otherwise, go to step 701.
Step 703, the virtual character obtains gold coins.
When the virtual character kills a zombie, the virtual character obtains gold coins accordingly.
Step 704, determine whether the virtual character is close to the skill vending machine. If so, go to step 705; otherwise, go to step 703.
Step 705, display the gold coins required to purchase the skill and a purchase button.
Step 706, determine whether the virtual character has killed a zombie. If so, go to step 707; otherwise, go to step 705.
Step 707, the zombie dies.
Illustratively, when the virtual character kills a zombie, the client records the virtual character's kill count accordingly.
Step 708, determine whether the stunning skill is triggered. If so, go to step 709; otherwise, go to step 707.
The client determines whether the virtual character's kill count reaches the number threshold, and if so, determines according to a probability whether to trigger the stunning skill.
Step 709, control the zombies to be stunned so that they cannot move or attack the player.
The virtual character uses the stunning skill to stun the zombies within its action range.
Step 710, determine whether the stunning time has ended. If so, go to step 711; otherwise, go to step 709.
Step 711, control the zombies to return to normal.
When the stunning time ends, the client controls the zombies to return to normal.
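Steps 709 to 711 (stunning the zombies within the action range and recovering them when the stunning time ends) can be sketched as follows. This is an illustrative 2D sketch under assumed data structures; the field names and circular range check are not prescribed by this application:

```python
import math

def zombies_in_range(player_pos, zombies, radius):
    """Return the zombies inside the skill's circular action range,
    centred on the first virtual character's position."""
    px, py = player_pos
    return [z for z in zombies
            if math.hypot(z["x"] - px, z["y"] - py) <= radius]

def apply_stun(player_pos, zombies, radius, now, duration):
    # Step 709: stun every zombie inside the action range; a stunned
    # zombie cannot move or attack until the stun expires.
    for z in zombies_in_range(player_pos, zombies, radius):
        z["stunned_until"] = now + duration

def update_zombies(zombies, now):
    # Steps 710-711: when the stunning time ends, recover to normal.
    for z in zombies:
        if z.get("stunned_until") is not None and now >= z["stunned_until"]:
            z["stunned_until"] = None
```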
Illustratively, this embodiment also provides an interaction method between the client and the server when the virtual character obtains and uses gold coins. Fig. 15 is a flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application. The method comprises the following steps:
Step 801, the client controls the virtual character to attack a zombie.
Step 802, the client reports to the server the damage value that the virtual character inflicted on the zombie.
Step 803, the server checks the damage value to determine whether the damage is legal.
Step 804, when the damage is legal, the server returns a kill-success message to the client.
Step 805, after receiving the kill-success message sent by the server, the client controls the zombie to die.
Step 806, the server increases the number of gold coins obtained by the virtual character for killing the zombie accordingly.
Step 807, the server sends a protocol to the client to notify the client to update the gold coin count.
Step 808, the client increases the virtual character's gold coin count accordingly.
Step 809, the client receives an instruction from the user to purchase a weapon or unlock a zone using gold coins.
Step 810, the client sends a gold-coin-usage protocol to the server.
Step 811, the server checks the validity of the transaction.
Step 812, after determining that the transaction is legal, the server returns a protocol to the client permitting the gold coins to be used.
Step 813, the client determines that the transaction is successful.
In summary, in the method provided by this embodiment, the virtual character obtains gold coins by killing zombies and can purchase skills with the gold coins at the skill vending machine; the stunning skill is obtained after a successful purchase. When the virtual character kills a certain number of zombies, the virtual character is controlled to use the stunning skill automatically. When the virtual character is surrounded by a large number of zombies, the zombies can be stunned by the stunning skill, and the user can control the virtual character to escape from the zombies' encirclement.
In the method provided by this embodiment, when the client reports a kill, the server verifies the damage inflicted by the virtual character and determines whether the zombie is killed. After the virtual character kills a zombie, the server sends a gold-coin-increase instruction to the client; when the client needs to spend gold coins, the server checks the validity of the transaction. This guarantees the accuracy of the virtual character's gold coin count.
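The server-side verification in steps 802 to 812 can be sketched as follows. This is an illustrative sketch only: the class, method names, legality bound, and coin reward are hypothetical stand-ins for the protocol messages, not part of this application:

```python
class Server:
    """Minimal sketch of the authoritative server: it checks reported damage
    before confirming a kill and validates transactions before spending."""

    MAX_DAMAGE_PER_HIT = 100   # hypothetical legality bound for one attack
    COINS_PER_KILL = 10        # hypothetical reward per confirmed kill

    def __init__(self):
        self.coins = {}        # player id -> authoritative gold coin count

    def report_damage(self, player_id, damage):
        # Step 803: check the damage value for legality.
        if not (0 < damage <= self.MAX_DAMAGE_PER_HIT):
            return {"ok": False}   # illegal damage, no kill confirmed
        # Steps 804/806: kill confirmed, coin count increased server-side.
        self.coins[player_id] = self.coins.get(player_id, 0) + self.COINS_PER_KILL
        return {"ok": True, "coins": self.coins[player_id]}

    def spend(self, player_id, cost):
        # Steps 811-812: validate the transaction before permitting it.
        if self.coins.get(player_id, 0) < cost:
            return {"ok": False}
        self.coins[player_id] -= cost
        return {"ok": True, "coins": self.coins[player_id]}
```

Keeping the coin balance on the server, with the client only displaying the count it is told, is what makes the balance tamper-resistant.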
The above embodiments describe the above method based on the application scenario of the game, and the following describes the above method by way of example in the application scenario of military simulation.
The simulation technology is a model technology which reflects system behaviors or processes by simulating real world experiments by using software and hardware.
The military simulation program is a program specially constructed for military application by using a simulation technology, and is used for carrying out quantitative analysis on sea, land, air and other operational elements, weapon equipment performance, operational actions and the like, further accurately simulating a battlefield environment, presenting a battlefield situation and realizing the evaluation of an operational system and the assistance of decision making.
In one example, soldiers establish a virtual battlefield on the terminal where the military simulation program runs and fight in teams. A soldier controls a virtual object to perform at least one of the following actions in the virtual battlefield environment: standing, squatting, sitting, lying on the back, lying prone, lying on the side, walking, running, climbing, driving, shooting, throwing, attacking, being injured, reconnaissance, and close combat. The virtual battlefield environment comprises at least one natural form among flat ground, mountains, plateaus, basins, deserts, rivers, lakes, oceans, and vegetation, as well as site forms such as buildings, vehicles, ruins, and training grounds. The virtual objects include virtual characters, virtual animals, cartoon characters, and the like; each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of the space in the three-dimensional virtual environment.
Based on the above, in one example, soldier A controls virtual object A to move in the virtual environment. When virtual object A is surrounded by a plurality of animals in the virtual environment and has killed a certain number of them, virtual object A is automatically controlled to use the stunning skill to stun the surrounding animals, making it easier for virtual object A to escape from the animals' encirclement.
In summary, in this embodiment the virtual character control method is applied to a military simulation program: the soldier obtains a stunning skill that simulates the effect of a smoke shell or tear gas, and the virtual object is controlled to randomly stun the surrounding animals with it, which improves the soldier's survival ability and provides better training.
The following are apparatus embodiments of the present application. For details not described in the apparatus embodiments, reference may be made to the corresponding method embodiments above.
Fig. 16 is a block diagram of a control device of a virtual character according to an exemplary embodiment of the present application. The device comprises:
a display module 901, configured to display a virtual environment screen, the virtual environment screen comprising: a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing a stunning skill;
a control module 902, configured to control the first virtual character to use the stunning skill in response to a defeat count of the first virtual character satisfying a number threshold, the defeat count being the number of second virtual characters defeated by the first virtual character;
the control module 902 is further configured to control the second virtual character located within an action range of the stunning skill to be stunned, where the action range includes an area range determined in the virtual environment according to the position of the first virtual character.
In an optional embodiment, the apparatus further comprises:
a determining module 903, configured to determine, in response to the defeat count of the first virtual character satisfying a number threshold, whether to trigger the stunning skill according to a first probability;
the control module 902 is further configured to control the first virtual character to use the stunning skill in response to the stunning skill being triggered.
In an optional embodiment, the apparatus further comprises:
a recording module 904, configured to reset the defeat count of the first virtual character to zero in response to the stunning skill not being triggered.
In an alternative embodiment, the apparatus further comprises:
an obtaining module 905, configured to obtain, in response to the defeat count of the first virtual character satisfying a number threshold, the number of target virtual characters located within a first range, the first range being an area range determined in the virtual environment according to the position of the first virtual character;
the determining module 903 is further configured to determine whether to trigger the stunning skill according to the first probability in response to the number of target virtual characters not satisfying a threshold.
In an optional embodiment, the determining module 903 is further configured to determine whether to trigger the stunning skill according to a second probability in response to the number of target virtual characters satisfying the threshold, the second probability not being equal to the first probability.
In an optional embodiment, the control module 902 is further configured to control, in response to the number of target virtual characters not satisfying the threshold, the second virtual character located within the action range of the stunning skill to be subjected to a first stunning effect;
the control module 902 is further configured to control, in response to the number of target virtual characters satisfying the threshold, the second virtual character located within the action range of the stunning skill to be subjected to a second stunning effect;
wherein the first stunning effect and the second stunning effect correspond to different maximum action durations or damage values.
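Selecting between the first and second stunning effects can be sketched as follows. The concrete durations and damage values are hypothetical; the embodiment only requires that the two effects differ in maximum action duration or damage value:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StunEffect:
    max_duration: float  # maximum action duration, in seconds
    damage: int          # damage value applied on hit

# Hypothetical parameters: the stronger second effect applies when enough
# target virtual characters (e.g. teammates) are within the first range.
FIRST_EFFECT = StunEffect(max_duration=3.0, damage=0)
SECOND_EFFECT = StunEffect(max_duration=5.0, damage=10)

def select_stun_effect(num_targets, threshold=1):
    """Pick the stunning effect based on how many targets are in range."""
    return SECOND_EFFECT if num_targets >= threshold else FIRST_EFFECT
```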
In an optional embodiment, the apparatus further comprises:
a purchasing module 906, configured to reduce, in response to a purchase operation, the number of virtual items owned by the first virtual character and control the first virtual character to obtain the stunning skill;
wherein the virtual items are obtained by the first virtual character by defeating the second virtual character.
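The purchasing module's behavior can be sketched as follows. The function name, dictionary fields, and price are hypothetical illustrations of "reduce the virtual items and grant the skill":

```python
def purchase_skill(character, skill_name, price):
    """Spend gold coins (virtual items earned by defeating second virtual
    characters) to unlock a skill; returns False if the balance is short."""
    if character["coins"] < price:
        return False
    character["coins"] -= price
    character["skills"].add(skill_name)
    return True
```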
In an optional embodiment, the display module 901 is further configured to display a skill purchasing interface for purchasing the stunning skill in response to the distance between the first virtual character and a target object being smaller than a distance threshold, the target object being a virtual object controlled by a server.
In an optional embodiment, a collision detection model is disposed on the target object, and the collision detection model is configured to detect a distance between the first virtual character and the target object;
the display module 901 is further configured to display the skill purchasing interface in response to the collision of the three-dimensional virtual model of the first virtual character with the collision detection model.
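The collision check between the character's model and the machine's collision detection model can be sketched with a simple axis-aligned box overlap test. This is a 2D illustration under assumed box representations, not the actual engine's collision system:

```python
def intersects(box_a, box_b):
    """Axis-aligned overlap test; boxes are (min_x, min_y, max_x, max_y)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def should_show_purchase_ui(character_box, machine_trigger_box):
    # The skill purchasing interface is displayed when the character's model
    # collides with the detection volume around the skill vending machine.
    return intersects(character_box, machine_trigger_box)
```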
In an optional embodiment, the display module 901 is further configured to display a stunning special effect corresponding to the second virtual character, where the stunning special effect includes at least one of displaying a stunning mark or a text prompt on a head of the second virtual character, displaying the second virtual character differently, and controlling the second virtual character to fall over.
In an optional embodiment, the stunning effect corresponds to a maximum action duration, and the apparatus further includes:
a timing module 907, configured to time the duration for which the second virtual character has been subjected to the stunning effect;
the control module 902 is further configured to stop the stunning effect on the second virtual character in response to the duration reaching the maximum action duration.
In an optional embodiment, the control module 902 is further configured to control, in response to the second virtual character within the action range of the stunning skill being hit by at least two stunning skills, the second virtual character to be subjected to a third stunning effect.
It should be noted that: the control device of the virtual character provided in the above embodiment is only illustrated by the division of the above functional modules, and in practical applications, the functions may be allocated to different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the control device of the virtual character provided in the above embodiments and the control method embodiment of the virtual character belong to the same concept, and the specific implementation process thereof is described in detail in the method embodiment and is not described herein again.
Fig. 17 shows a block diagram of a terminal 2000 according to an exemplary embodiment of the present application. The terminal 2000 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 2000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, terminal 2000 includes: a processor 2001 and a memory 2002.
The processor 2001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 2001 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 2001 may also include a main processor and a coprocessor, the main processor being a processor for processing data in an awake state, also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 2001 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 2001 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 2002 may include one or more computer-readable storage media, which may be non-transitory. The memory 2002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2002 is used to store at least one instruction for execution by processor 2001 to implement the control method for virtual characters provided by method embodiments herein.
In some embodiments, terminal 2000 may further optionally include: a peripheral interface 2003 and at least one peripheral. The processor 2001, memory 2002 and peripheral interface 2003 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 2003 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 2004, a touch display 2005, a camera 2006, an audio circuit 2007, a positioning assembly 2008, and a power supply 2009.
The peripheral interface 2003 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 2001 and the memory 2002. In some embodiments, the processor 2001, memory 2002 and peripheral interface 2003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2001, the memory 2002, and the peripheral interface 2003 may be implemented on separate chips or circuit boards, which are not limited in this embodiment.
The Radio Frequency circuit 2004 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 2004 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 2004 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 2004 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 2004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 2004 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 2005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 2005 is a touch display screen, the display screen 2005 also has the ability to capture touch signals on or over the surface of the display screen 2005. The touch signal may be input to the processor 2001 as a control signal for processing. At this point, the display 2005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 2005 may be one, providing the front panel of terminal 2000; in other embodiments, the display screens 2005 can be at least two, respectively disposed on different surfaces of the terminal 2000 or in a folded design; in still other embodiments, display 2005 may be a flexible display disposed on a curved surface or a folded surface of terminal 2000. Even more, the display screen 2005 can be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 2005 can be made of a material such as an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), and the like.
Camera assembly 2006 is used to capture images or video. Optionally, camera assembly 2006 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 2006 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 2007 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2001 for processing or inputting the electric signals to the radio frequency circuit 2004 so as to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different positions of the terminal 2000. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 2001 or the radio frequency circuit 2004 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 2007 may also include a headphone jack.
The positioning component 2008 is configured to locate the current geographic location of the terminal 2000 to implement navigation or LBS (Location Based Service). The positioning component 2008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 2009 is used to power the various components in terminal 2000. The power supply 2009 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 2009 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 2000 also includes one or more sensors 2010. The one or more sensors 2010 include, but are not limited to: acceleration sensor 2011, gyro sensor 2012, pressure sensor 2013, fingerprint sensor 2014, optical sensor 2015, and proximity sensor 2016.
The acceleration sensor 2011 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 2000. For example, the acceleration sensor 2011 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 2001 may control the touch display screen 2005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 2011. The acceleration sensor 2011 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 2012 can detect the body direction and the rotation angle of the terminal 2000, and the gyroscope sensor 2012 and the acceleration sensor 2011 can cooperate to acquire the 3D motion of the user on the terminal 2000. The processor 2001 may implement the following functions according to the data collected by the gyro sensor 2012: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 2013 may be disposed on the side bezel of terminal 2000 and/or underlying touch screen display 2005. When the pressure sensor 2013 is disposed on the side frame of the terminal 2000, the holding signal of the user to the terminal 2000 can be detected, and the processor 2001 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 2013. When the pressure sensor 2013 is disposed at a lower layer of the touch display screen 2005, the processor 2001 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 2005. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 2014 is used for collecting fingerprints of the user, and the processor 2001 identifies the identity of the user according to the fingerprints collected by the fingerprint sensor 2014, or the fingerprint sensor 2014 identifies the identity of the user according to the collected fingerprints. Upon identifying that the user's identity is a trusted identity, the processor 2001 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 2014 may be disposed on the front, back, or side of the terminal 2000. When a physical key or vendor Logo is provided on the terminal 2000, the fingerprint sensor 2014 may be integrated with the physical key or vendor Logo.
The optical sensor 2015 is used to collect ambient light intensity. In one embodiment, the processor 2001 may control the display brightness of the touch display 2005 according to the ambient light intensity collected by the optical sensor 2015. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 2005 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 2005 is turned down. In another embodiment, the processor 2001 may also dynamically adjust the shooting parameters of the camera assembly 2006 according to the ambient light intensity collected by the optical sensor 2015.
The proximity sensor 2016, also known as a distance sensor, is typically disposed on the front panel of the terminal 2000. The proximity sensor 2016 is used to collect the distance between the user and the front surface of the terminal 2000. In one embodiment, when the proximity sensor 2016 detects that the distance between the user and the front surface of the terminal 2000 gradually decreases, the processor 2001 controls the touch display 2005 to switch from a bright-screen state to a dark-screen state; when the proximity sensor 2016 detects that the distance between the user and the front surface of the terminal 2000 gradually increases, the processor 2001 controls the touch display 2005 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 17 is not intended to be limiting of terminal 2000 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The present application further provides a computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the method for controlling a virtual character provided in any of the above exemplary embodiments.
The present application further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the method for controlling a virtual character provided in any of the above exemplary embodiments.
The present application also provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method for controlling a virtual character provided in the above optional implementations.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method for controlling a virtual character, the method comprising:
displaying a virtual environment screen, the virtual environment screen comprising: a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing a stunning skill;
controlling the first virtual character to use the stunning skill in response to a defeat count of the first virtual character satisfying a number threshold, the defeat count being the number of second virtual characters defeated by the first virtual character;
controlling the second virtual character located within an action range of the stunning skill to be stunned, the action range comprising an area range determined in the virtual environment according to the position of the first virtual character.
2. The method of claim 1, wherein the controlling the first virtual character to use the stunning skill in response to the defeat count of the first virtual character satisfying the count threshold comprises:
determining whether to trigger the stunning skill according to a first probability in response to the defeat count of the first virtual character satisfying the count threshold;
controlling the first virtual character to use the stunning skill in response to triggering the stunning skill.
3. The method of claim 2, further comprising:
resetting the defeat count of the first virtual character to zero in response to not triggering the stunning skill.
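Outside the claim language, the trigger logic of claims 1 to 3 might be sketched as follows. The class name `StunController`, the concrete threshold and probability values, and the choice to also reset the defeat count after a successful trigger are illustrative assumptions, not taken from the patent:

```python
import random

# Illustrative constants; the patent does not fix concrete values.
DEFEAT_THRESHOLD = 5      # "count threshold" of claim 1
FIRST_PROBABILITY = 0.3   # "first probability" of claim 2

class StunController:
    """Sketch of the defeat-count-gated stun trigger of claims 1-3."""

    def __init__(self, threshold=DEFEAT_THRESHOLD, probability=FIRST_PROBABILITY):
        self.defeat_count = 0
        self.threshold = threshold
        self.probability = probability

    def on_defeat(self, rng=random.random):
        """Called each time the first virtual character defeats a second one.

        Returns True if the stunning skill is used on this defeat.
        """
        self.defeat_count += 1
        if self.defeat_count < self.threshold:
            return False
        # Claim 2: once the threshold is met, decide whether to trigger
        # the skill according to the first probability.
        triggered = rng() < self.probability
        # Claim 3: reset the defeat count when the skill does not trigger
        # (resetting after a successful trigger as well is an assumption).
        self.defeat_count = 0
        return triggered
```

Passing `rng` in explicitly keeps the probabilistic branch testable; a game loop would simply call `on_defeat()` with the default random source.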
4. The method of claim 2 or 3, wherein the determining whether to trigger the stunning skill according to the first probability in response to the defeat count of the first virtual character satisfying the count threshold comprises:
obtaining a number of target virtual characters located within a first range in response to the defeat count of the first virtual character satisfying the count threshold, the first range being an area range determined in the virtual environment according to the position of the first virtual character;
determining whether to trigger the stunning skill according to the first probability in response to the number of target virtual characters not satisfying a threshold.
5. The method of claim 4, further comprising:
determining whether to trigger the stunning skill according to a second probability in response to the number of target virtual characters satisfying the threshold, the second probability being unequal to the first probability.
6. The method of claim 5, wherein the controlling the second virtual character within the action range of the stunning skill to be stunned comprises:
controlling the second virtual character within the action range of the stunning skill to be subjected to a first stunning effect in response to the number of target virtual characters not satisfying the threshold;
controlling the second virtual character within the action range of the stunning skill to be subjected to a second stunning effect in response to the number of target virtual characters satisfying the threshold;
wherein the first stunning effect and the second stunning effect differ in maximum action duration or damage value.
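The selection logic of claims 4 to 6 (counting nearby targets, then picking a probability and a stun effect accordingly) could be sketched as below; the function names, the circular first range, and the tuple return shape are illustrative assumptions:

```python
import math

def targets_in_range(origin, targets, radius):
    """Number of target characters within the circular 'first range'
    around the first character's position (claim 4). Positions are
    (x, y) tuples; a real engine would use its own spatial query."""
    ox, oy = origin
    return sum(1 for (x, y) in targets
               if math.hypot(x - ox, y - oy) <= radius)

def choose_probability_and_effect(n_targets, target_threshold,
                                  first_p, second_p):
    """Claims 4-6: when the nearby target count satisfies the threshold,
    use the second probability and the second stunning effect; otherwise
    use the first probability and the first stunning effect."""
    if n_targets >= target_threshold:
        return second_p, "second_effect"  # e.g. longer stun or more damage
    return first_p, "first_effect"
```

The two effects are returned here as plain labels; per claim 6 they would differ in maximum action duration or damage value.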
7. The method of any one of claims 1 to 3, further comprising:
reducing, in response to a purchase operation, a number of virtual items owned by the first virtual character, and controlling the first virtual character to obtain the stunning skill;
wherein the virtual items are obtained by the first virtual character by defeating the second virtual characters.
8. The method of claim 7, wherein before the reducing the number of virtual items owned by the first virtual character and controlling the first virtual character to obtain the stunning skill, the method further comprises:
displaying a skill purchasing interface for purchasing the stunning skill in response to a distance between the first virtual character and a target object being less than a distance threshold, the target object being a virtual object controlled by a server.
9. The method of claim 8, wherein a collision detection model is arranged on the target object, the collision detection model being configured to detect the distance between the first virtual character and the target object;
the displaying the skill purchasing interface in response to the distance between the first virtual character and the target object being less than the distance threshold comprises:
displaying the skill purchasing interface in response to a three-dimensional virtual model of the first virtual character colliding with the collision detection model.
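A minimal sketch of the proximity trigger of claims 8 and 9, modeling the collision detection model as a bounding sphere around the target object; the sphere shape, the function names, and the `show_ui` callback are assumptions for illustration:

```python
import math

def sphere_collision(center_a, radius_a, center_b, radius_b):
    """True when two bounding spheres overlap; a simple stand-in for the
    collision detection model arranged on the target object (claim 9)."""
    return math.dist(center_a, center_b) <= radius_a + radius_b

def maybe_show_purchase_ui(character_pos, character_radius,
                           npc_pos, npc_trigger_radius, show_ui):
    """Claims 8-9: open the skill-purchasing interface when the first
    character's three-dimensional model enters the target object's
    collision detection volume (i.e. the distance drops below the
    distance threshold). Returns True if the interface was shown."""
    if sphere_collision(character_pos, character_radius,
                        npc_pos, npc_trigger_radius):
        show_ui()
        return True
    return False
```

In an engine, `show_ui` would be the call that renders the purchasing interface; testing with a recording lambda keeps the sketch self-contained.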
10. The method of any one of claims 1 to 3, wherein after the controlling the second virtual character within the action range of the stunning skill to be stunned, the method further comprises:
displaying a stunning special effect corresponding to the second virtual character, the stunning special effect comprising at least one of: displaying a stun mark or a text prompt above the head of the second virtual character, displaying the second virtual character in a distinguishing manner, or controlling the second virtual character to fall to the ground.
11. The method of any one of claims 1 to 3, wherein the stunning effect corresponds to a maximum action duration, the method further comprising:
timing an action duration of the second virtual character under the stunning effect;
stopping the stunning effect on the second virtual character in response to the action duration reaching the maximum action duration.
12. The method of any one of claims 1 to 3, wherein the controlling the second virtual character within the action range of the stunning skill to be stunned comprises:
controlling the second virtual character to be subjected to a third stunning effect in response to the second virtual character within the action range of the stunning skill being hit by at least two stunning skills.
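The timed stun of claim 11 and the stacked "third stunning effect" of claim 12 might be modeled together as a small state object; the class name, the restart-the-timer behavior on a second hit, and the effect labels are assumptions, since the patent leaves the third effect unspecified:

```python
class StunEffect:
    """Sketch of claims 11-12: a stun with a maximum action duration,
    upgraded to a 'third stunning effect' when hit by a second stun."""

    def __init__(self, max_duration):
        self.max_duration = max_duration  # claim 11's maximum action duration
        self.elapsed = 0.0
        self.hits = 1
        self.active = True

    def hit_again(self):
        # Claim 12: two or more overlapping stun hits yield a third effect;
        # restarting the timer here is an illustrative assumption.
        self.hits += 1
        self.elapsed = 0.0

    def tick(self, dt):
        """Advance the timer; stop the effect once the timed duration
        reaches the maximum action duration (claim 11)."""
        if self.active:
            self.elapsed += dt
            if self.elapsed >= self.max_duration:
                self.active = False

    @property
    def effect(self):
        return "third" if self.hits >= 2 else "first"
```

A game loop would call `tick(dt)` each frame and `hit_again()` whenever another stunning skill lands on the already-stunned character.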
13. An apparatus for controlling a virtual character, the apparatus comprising:
a display module, configured to display a virtual environment screen, the virtual environment screen comprising: a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing a stunning skill;
a control module, configured to control the first virtual character to use the stunning skill in response to a defeat count of the first virtual character satisfying a count threshold, the defeat count being a number of second virtual characters defeated by the first virtual character;
the control module being further configured to control the second virtual character within an action range of the stunning skill to be stunned, the action range comprising an area range determined in the virtual environment according to a position of the first virtual character.
14. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement a method of controlling a virtual character according to any one of claims 1 to 12.
15. A computer-readable storage medium, having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the control method of a virtual character according to any one of claims 1 to 12.
CN202010589764.4A 2020-06-24 2020-06-24 Virtual character control method, device, equipment and medium Active CN111589144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010589764.4A CN111589144B (en) 2020-06-24 2020-06-24 Virtual character control method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010589764.4A CN111589144B (en) 2020-06-24 2020-06-24 Virtual character control method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN111589144A true CN111589144A (en) 2020-08-28
CN111589144B CN111589144B (en) 2023-05-16

Family

ID=72189058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010589764.4A Active CN111589144B (en) 2020-06-24 2020-06-24 Virtual character control method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN111589144B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112774204A (en) * 2021-01-22 2021-05-11 北京字跳网络技术有限公司 Role collision avoidance method, device, equipment and storage medium
CN113476825A (en) * 2021-07-23 2021-10-08 网易(杭州)网络有限公司 Role control method, role control device, equipment and medium in game
CN113713373A (en) * 2021-08-27 2021-11-30 网易(杭州)网络有限公司 Information processing method and device in game, electronic equipment and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070298881A1 (en) * 2006-06-12 2007-12-27 Hiroaki Kawamura Game apparatus and program
CN111265872A (en) * 2020-01-15 2020-06-12 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal and storage medium
CN111298440A (en) * 2020-01-20 2020-06-19 腾讯科技(深圳)有限公司 Virtual role control method, device, equipment and medium in virtual environment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS (佚名): "Skill system of Elite Force (《精英部队》)", Kuai8 Online Games (《快吧网游》) *
ARCADE ERA (街机时代): "'One-hit stun' moves in arcade games: three seconds is enough for me", Baidu Baijiahao (《百度百家号》) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112774204A (en) * 2021-01-22 2021-05-11 北京字跳网络技术有限公司 Role collision avoidance method, device, equipment and storage medium
CN112774204B (en) * 2021-01-22 2023-10-20 北京字跳网络技术有限公司 Role collision avoidance method, device, equipment and storage medium
CN113476825A (en) * 2021-07-23 2021-10-08 网易(杭州)网络有限公司 Role control method, role control device, equipment and medium in game
CN113476825B (en) * 2021-07-23 2024-05-10 网易(杭州)网络有限公司 Role control method, role control device, equipment and medium in game
CN113713373A (en) * 2021-08-27 2021-11-30 网易(杭州)网络有限公司 Information processing method and device in game, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN111589144B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN111589131B (en) Control method, device, equipment and medium of virtual role
CN110694261B (en) Method, terminal and storage medium for controlling virtual object to attack
CN111249730B (en) Virtual object control method, device, equipment and readable storage medium
CN110433488B (en) Virtual character-based fight control method, device, equipment and medium
CN110755841B (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN110917619B (en) Interactive property control method, device, terminal and storage medium
CN111589124B (en) Virtual object control method, device, terminal and storage medium
CN110613938B (en) Method, terminal and storage medium for controlling virtual object to use virtual prop
CN110585710B (en) Interactive property control method, device, terminal and storage medium
CN111659119B (en) Virtual object control method, device, equipment and storage medium
CN111714893A (en) Method, device, terminal and storage medium for controlling virtual object to recover attribute value
CN111389005B (en) Virtual object control method, device, equipment and storage medium
CN110917623B (en) Interactive information display method, device, terminal and storage medium
CN111589144B (en) Virtual character control method, device, equipment and medium
CN110755844B (en) Skill activation method and device, electronic equipment and storage medium
CN111228809A (en) Operation method, device, equipment and readable medium of virtual prop in virtual environment
WO2021159795A1 (en) Method and apparatus for skill aiming in three-dimensional virtual environment, device and storage medium
CN112076467A (en) Method, device, terminal and medium for controlling virtual object to use virtual prop
CN111338534A (en) Virtual object game method, device, equipment and medium
CN112076469A (en) Virtual object control method and device, storage medium and computer equipment
CN111744186A (en) Virtual object control method, device, equipment and storage medium
CN110478904B (en) Virtual object control method, device, equipment and storage medium in virtual environment
CN110917618A (en) Method, apparatus, device and medium for controlling virtual object in virtual environment
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK; Ref legal event code: DE; Ref document number: 40027318; Country of ref document: HK

GR01 Patent grant