CN111589144B - Virtual character control method, device, equipment and medium - Google Patents


Info

Publication number: CN111589144B
Application number: CN202010589764.4A
Authority: CN (China)
Prior art keywords: virtual character, virtual, stunning, character, skills
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN111589144A (en)
Inventors: 姚丽 (Yao Li), 刘智洪 (Liu Zhihong)
Current assignee: Tencent Technology (Shenzhen) Co., Ltd.
Original assignee: Tencent Technology (Shenzhen) Co., Ltd.
Application CN202010589764.4A filed by Tencent Technology (Shenzhen) Co., Ltd.; priority to CN202010589764.4A
Published as CN111589144A; application granted and published as CN111589144B

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/56: Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F 13/80: Special adaptations for executing a specific game genre or game mode
    • A63F 13/837: Shooting of targets


Abstract

The application discloses a virtual character control method, device, equipment, and medium, relating to the field of virtual environments. The method comprises the following steps: displaying a virtual environment screen, the virtual environment screen comprising a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing a stunning skill; controlling the first virtual character to use the stunning skill in response to the defeat count of the first virtual character reaching a count threshold, the defeat count being the number of second virtual characters defeated by the first virtual character; and controlling second virtual characters within the scope of action of the stunning skill to be subjected to the stunning effect, the scope of action comprising a region determined in the virtual environment according to the position of the first virtual character. The method simplifies the operations a user must perform to free the virtual character from encirclement, improving human-computer interaction efficiency.

Description

Virtual character control method, device, equipment and medium
Technical Field
The embodiment of the application relates to the field of virtual environments, in particular to a virtual character control method, device, equipment and medium.
Background
In applications based on a three-dimensional virtual environment, such as first-person shooter games, a user can control a virtual character in the virtual environment to walk, run, climb, shoot, fight, and perform other actions.
In a first-person shooter game in zombie mode, computer-controlled zombies attack the virtual character controlled by the client, and the virtual character must hit the zombies in the virtual environment to ensure its own survival and ultimately win. When a large number of zombies surround the virtual character, the user can control the virtual character to kill or knock back the nearby zombies by triggering the shooting control, so that the virtual character can break free.
When too many zombies surround the virtual character, the user must continuously aim and repeatedly tap the shooting control to kill or knock them back before the virtual character can escape; the user operation is overly complex, and human-computer interaction efficiency is low.
Disclosure of Invention
The embodiments of the application provide a virtual character control method, device, equipment, and medium, which can simplify the operations a user performs to free the virtual character when it is surrounded by zombies, and improve human-computer interaction efficiency. The technical scheme is as follows:
In one aspect, a method for controlling a virtual character is provided, the method including:
displaying a virtual environment screen, the virtual environment screen comprising: a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing a stunning skill;
controlling the first virtual character to use the stunning skill in response to the defeat count of the first virtual character reaching a count threshold, the defeat count being the number of second virtual characters defeated by the first virtual character;
controlling second virtual characters located within the scope of action of the stunning skill to be subjected to the stunning effect, the scope of action comprising a region determined in the virtual environment according to the position of the first virtual character.
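As a minimal sketch of the claimed control flow (the class names, threshold, and radius are illustrative choices, not values fixed by the patent), the defeat counter, the automatic skill use, and the position-based region check could look like this:

```python
import math
from dataclasses import dataclass

@dataclass
class Character:
    x: float
    y: float
    stunned: bool = False

@dataclass
class FirstCharacter(Character):
    defeat_count: int = 0

# Illustrative parameters; the patent leaves the threshold and region shape open.
DEFEAT_THRESHOLD = 50
STUN_RADIUS = 5.0

def on_defeat(player: FirstCharacter, enemies: list[Character]) -> bool:
    """Called each time the first character defeats a second character.

    Returns True if the stunning skill fired on this call."""
    player.defeat_count += 1
    if player.defeat_count < DEFEAT_THRESHOLD:
        return False
    player.defeat_count = 0  # reset the counter once the skill is used
    # Region determined by the first character's position: a circle here.
    for enemy in enemies:
        if math.hypot(enemy.x - player.x, enemy.y - player.y) <= STUN_RADIUS:
            enemy.stunned = True
    return True
```

A stunned enemy would then be skipped by the movement/attack update loop until its stun timer expires.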
In another aspect, there is provided a control apparatus of a virtual character, the apparatus including:
a display module, configured to display a virtual environment screen, the virtual environment screen comprising: a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing a stunning skill;
a control module, configured to control the first virtual character to use the stunning skill in response to the defeat count of the first virtual character reaching a count threshold, the defeat count being the number of second virtual characters defeated by the first virtual character;
the control module is further configured to control second virtual characters located within the scope of action of the stunning skill to be subjected to the stunning effect, where the scope of action comprises a region determined in the virtual environment according to the position of the first virtual character.
In another aspect, a computer device is provided, the computer device including a processor and a memory, the memory storing at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the method of controlling a virtual character as described in the above aspect.
In another aspect, there is provided a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the method of controlling a virtual character as described in the above aspect.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the virtual character control method provided in the above-mentioned alternative implementations.
The beneficial effects of the technical solutions provided in the embodiments of the application include at least the following:
By giving the first virtual character a stunning skill, after the first virtual character defeats a certain number of zombies (second virtual characters), the client automatically controls the first virtual character to use the skill to stun the zombies near it. The first virtual character can thereby restrict the activity of the surrounding zombies and quickly break out of their encirclement while they are stunned and unable to act. The stunning skill is a passive skill used automatically when its usage conditions are met; the user needs no extra operation to trigger it and stun the nearby zombies, which simplifies user operation and improves the human-computer interaction efficiency of controlling the first virtual character to break free.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application;
Fig. 2 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
Fig. 3 is a method flowchart of a virtual character control method provided in an exemplary embodiment of the present application;
Fig. 4 is a schematic view of a camera model corresponding to the perspective of a virtual object provided in an exemplary embodiment of the present application;
Fig. 5 is a user interface diagram of a virtual character control method provided in an exemplary embodiment of the present application;
Fig. 6 is a method flowchart of a virtual character control method according to another exemplary embodiment of the present application;
Fig. 7 is a user interface diagram of a virtual character control method according to another exemplary embodiment of the present application;
Fig. 8 is a user interface diagram of a virtual character control method according to another exemplary embodiment of the present application;
Fig. 9 is a user interface diagram of a virtual character control method according to another exemplary embodiment of the present application;
Fig. 10 is a schematic view of a collision detection model of a virtual character control method according to another exemplary embodiment of the present application;
Fig. 11 is a method flowchart of a virtual character control method according to another exemplary embodiment of the present application;
Fig. 12 is a schematic view of the action range of a virtual character control method according to another exemplary embodiment of the present application;
Fig. 13 is a user interface diagram of a virtual character control method according to another exemplary embodiment of the present application;
Fig. 14 is a method flowchart of a virtual character control method according to another exemplary embodiment of the present application;
Fig. 15 is a method flowchart of a virtual character control method according to another exemplary embodiment of the present application;
Fig. 16 is a block diagram of a virtual character control apparatus provided in another exemplary embodiment of the present application;
Fig. 17 is a block diagram of a terminal provided in an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, the terms involved in the embodiments of the present application will be briefly described:
Virtual environment: the environment that an application displays (or provides) while running on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-imaginary world, or a purely imaginary world. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
Virtual character: refers to a movable object in a virtual environment. The movable object may be a virtual person, a virtual animal, a cartoon character, etc., such as characters, animals, plants, oil drums, walls, and stones displayed in the three-dimensional virtual environment. Optionally, the virtual character is a three-dimensional model created based on skeletal animation technology. Each virtual character has its own shape and volume in the three-dimensional virtual environment and occupies part of the space in it.
First-person shooter (FPS) game: a shooting game that the user plays from a first-person perspective, i.e., a shooting game in which the virtual environment is observed from the perspective of a first virtual character. In the game, at least two virtual characters battle in a single round in the virtual environment. A virtual character survives by avoiding attacks launched by other virtual characters and/or hazards present in the virtual environment (such as poison zones, swamps, and bombs); when a virtual character's health in the virtual environment drops to zero, its life there ends, and the virtual characters that ultimately survive are the winners. Optionally, a battle may start at the moment the first client joins and end at the moment the last client exits, and each client may control one or more virtual characters in the virtual environment. Optionally, the competitive mode of the battle may be a solo mode, a two-person team mode, or a multi-person team mode, which is not limited in the embodiments of the present application.
UI (User Interface) control: any visual control or element that can be seen on the user interface of an application, such as a picture, input box, text box, button, or tab. Some UI controls respond to user operations; for example, a shooting control controls the shooting of the virtual character in the virtual environment. UI controls referred to in the embodiments of the present application include, but are not limited to, the shooting control.
The method provided by the application can be applied to application programs with a virtual environment and virtual characters, for example, applications in which the user can control the movement of a virtual character within the virtual environment. By way of example, the methods provided in the present application may be applied to: Virtual Reality (VR) applications, Augmented Reality (AR) applications, three-dimensional map programs, virtual reality games, augmented reality games, First-Person Shooting games (FPS), Third-Person Shooting games (TPS), Multiplayer Online Battle Arena games (MOBA), and Strategy games (SLG).
By way of example, a game in a virtual environment consists of one or more maps of game worlds. The virtual environment in the game simulates real-world scenes; a user can control a virtual character in the game to walk, run, jump, shoot, fight, drive, attack other virtual characters using virtual weapons, and so on. Interactivity is high, and multiple users can team up online for competitive matches.
In some embodiments, the application may be a shooting game, a racing game, a role-playing game, an adventure game, a sandbox game, a tactical game, or the like. The client can support at least one of the Windows, Apple, Android, iOS, and Linux operating systems, and clients on different operating systems can interconnect and intercommunicate. In some embodiments, the above-described client is a program suitable for a mobile terminal with a touch screen.
In some embodiments, the client is an application developed based on a three-dimensional engine, such as a Unity engine.
The terminals in this application may be desktop computers, laptop computers, cell phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, and the like. A client supporting a virtual environment, such as a client of an application supporting a three-dimensional virtual environment, is installed and runs in the terminal. The application may be any one of a battle royale (BR) game, a virtual reality application, an augmented reality application, a three-dimensional map application, a third-person shooter game, a first-person shooter game, and a multiplayer online tactical game. Alternatively, the application may be a stand-alone application, such as a stand-alone 3D game, or a network-online application.
Fig. 1 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application. As shown in fig. 1, the terminal includes a processor 101, a touch screen 102, and a memory 103.
Processor 101 may be at least one of a single core processor, a multi-core processor, an embedded chip, and a processor with instruction execution capabilities.
The touch screen 102 is either an ordinary touch screen or a pressure-sensitive touch screen. An ordinary touch screen can register a press or slide operation applied to the touch screen 102; a pressure-sensitive touch screen can additionally measure the force of a press applied to the touch screen 102.
The memory 103 stores executable programs for the processor 101. Illustratively, the memory 103 stores a virtual environment program A, an application program B, an application program C, a touch and pressure sensing module 18, and the kernel layer 19 of an operating system. The virtual environment program A is an application developed based on the three-dimensional virtual environment module 17. Optionally, the virtual environment program A includes, but is not limited to, at least one of a game program, a virtual reality program, a three-dimensional map program, and a three-dimensional presentation program developed with the three-dimensional virtual environment module (also referred to as the virtual environment module) 17. For example, when the operating system of the terminal is Android, the virtual environment program A is developed in the Java programming language and C#; for another example, when the operating system of the terminal is iOS, the virtual environment program A is developed in the Objective-C programming language and C#.
The three-dimensional Virtual environment module 17 is a module supporting multiple operating system platforms, and illustratively, the three-dimensional Virtual environment module can be used for program development in multiple fields such as a game development field, a Virtual Reality (VR) field, and a three-dimensional map field.
The touch (and pressure) sensing module 18 is a module for receiving touch events (and pressure touch events) reported by the touch screen driver 191; alternatively, the touch sensing module may lack a pressure-sensing function and not receive pressure touch events. A touch event includes the type of the event and its coordinate values; the type includes, but is not limited to, a touch start event, a touch move event, and a touch end event. A pressure touch event includes the pressure value and the coordinate values of the event. The coordinate values indicate the touch position of the operation on the display screen. Optionally, an abscissa axis is established along the horizontal direction of the display screen and an ordinate axis along the vertical direction, yielding a two-dimensional coordinate system.
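The event structure described above can be sketched as plain data types; the field names, the normalized pressure range, and the dispatch helper are assumptions for illustration, not the actual driver interface:

```python
from dataclasses import dataclass
from enum import Enum, auto

class TouchType(Enum):
    TOUCH_START = auto()
    TOUCH_MOVE = auto()
    TOUCH_END = auto()

@dataclass(frozen=True)
class TouchEvent:
    kind: TouchType
    x: int  # abscissa: pixels along the horizontal axis of the display
    y: int  # ordinate: pixels along the vertical axis of the display

@dataclass(frozen=True)
class PressureTouchEvent:
    pressure: float  # assumed normalized to 0.0-1.0; real drivers report raw values
    x: int
    y: int

def dispatch(event, has_pressure_sensing: bool):
    """Mimics the sensing module: pressure touch events are dropped when the
    module lacks a pressure-sensing function; ordinary touch events pass through."""
    if isinstance(event, PressureTouchEvent) and not has_pressure_sensing:
        return None
    return event
```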
Illustratively, the kernel layer 19 includes the touch screen driver 191 and other drivers 192. The touch screen driver 191 is a module for detecting pressure touch events; when the touch screen driver 191 detects a pressure touch event, it passes the event to the sensing module 18.
Other drivers 192 may be drivers associated with processor 101, drivers associated with memory 103, drivers associated with network components, drivers associated with sound components, and the like.
Those skilled in the art will appreciate that the foregoing is merely a generalized illustration of the structure of a terminal. In different embodiments, the terminal may have more or fewer components. For example, the terminal may also include a gravitational acceleration sensor, a gyroscopic sensor, a power source, and the like.
FIG. 2 illustrates a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 200 includes: terminal 210, server cluster 220.
The terminal 210 has installed and running on it a client 211 supporting a virtual environment; the client 211 may be an application supporting the virtual environment. When the terminal runs the client 211, the user interface of the client 211 is displayed on the screen of the terminal 210. The client may be any one of an FPS game, a TPS game, a MOBA game, a tactical game, or an SLG game; in this embodiment, the client is exemplified as an FPS game. The terminal 210 is the terminal used by the first user 212, who uses the terminal 210 to control a first virtual character located in the virtual environment; the first virtual character may be referred to as the master virtual character of the first user 212. The activities of the first virtual character include, but are not limited to: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual character is a virtual person, such as a simulated human character or a cartoon character.
The device types of the terminal 210 include: at least one of a smart phone, a tablet computer, an electronic book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only one terminal is shown in fig. 2, but in different embodiments there are a plurality of other terminals 240. In some embodiments, there is also at least one other terminal 240 that is a terminal corresponding to the developer, where the other terminal 240 is provided with a development and editing platform for the client of the virtual environment, the developer may edit and update the client on the other terminal 240, and transmit the updated client installation package to the server cluster 220 through a wired or wireless network, and the terminal 210 may download the client installation package from the server cluster 220 to implement the update for the client.
Terminal 210 and other terminals 240 are connected to server cluster 220 via a wireless network or a wired network.
Server cluster 220 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Server cluster 220 is used to provide background services for clients that support a three-dimensional virtual environment. Optionally, server cluster 220 performs primary computing tasks and the terminals perform secondary computing tasks; alternatively, server cluster 220 performs the secondary computing job and the terminal performs the primary computing job; alternatively, a distributed computing architecture is used for collaborative computing between server cluster 220 and the terminals.
Optionally, the terminal and the server are both computer devices.
In one illustrative example, server cluster 220 includes server 221 and server 226, and server 221 includes processor 222, user account database 223, combat service module 224, and user-oriented Input/Output Interface (I/O Interface) 225. The processor 222 is configured to load instructions stored in the server 221 and to process data in the user account database 223 and the combat service module 224; the user account database 223 stores data of the user accounts used by the terminal 210 and other terminals 240, such as account avatars, nicknames, combat-power rankings, and the service regions where the accounts are located; the combat service module 224 provides multiple combat rooms in which users can battle; the user-oriented I/O interface 225 establishes data communication with the terminal 210 via a wireless or wired network.
In connection with the description of the virtual environment and the description of the implementation environment, the method for controlling the virtual character provided in the embodiment of the present application is described, and the execution body of the method is exemplified as a client running on the terminal shown in fig. 1. The terminal is operated with an application program, which is a program supporting a virtual environment.
An exemplary embodiment is provided in which the virtual character control method is applied in the zombie mode of an FPS game.
In zombie mode, a virtual character may pick up a gold coin and the virtual character may use the gold coin to purchase skills in a skill vending machine (also known as a "water dispenser"), e.g., to purchase "stun" skills. Stunning skills are passive skills that are automatically used when the virtual character meets the skill usage conditions.
Illustratively, the usage condition of the stunning skill is that the skill is used randomly after the virtual character kills a specified number of zombies. For example, each time the virtual character kills 50 zombies, the virtual character is randomly controlled to use the skill automatically, stunning the zombies near the virtual character for a period of time, after which the zombies automatically return to normal. For example, the random probability is 50%, i.e., when the virtual character has killed 50 zombies, there is a 50% probability that the stunning skill is used. Being stunned means that a zombie cannot move. Illustratively, when a zombie is stunned, a stun effect is displayed above the zombie's head to inform the user that the zombie has been stunned.
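The random usage condition (a roll each time the kill count reaches a multiple of the specified number) can be sketched as follows; the 50-kill multiple and the 50% probability follow the example above, while the function and constant names are illustrative:

```python
import random

KILLS_PER_ROLL = 50          # specified number of zombies per roll (example value)
TRIGGER_PROBABILITY = 0.5    # 50% chance, as in the example above
STUN_DURATION = 3.0          # seconds; the patent does not fix the duration

def maybe_trigger_stun(kill_count: int, rng: random.Random) -> bool:
    """Roll for the passive stunning skill whenever the cumulative kill
    count reaches a multiple of the specified number of zombies."""
    if kill_count == 0 or kill_count % KILLS_PER_ROLL != 0:
        return False
    return rng.random() < TRIGGER_PROBABILITY
```

When the roll succeeds, the client would mark nearby zombies as stunned, display the stun effect above their heads, and restore them after `STUN_DURATION` elapses.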
Illustratively, the virtual character in this embodiment refers to the virtual character controlled by the client (user), while a zombie refers to a virtual character controlled by a server, a computer, artificial intelligence, or an automatic algorithm. Illustratively, in zombie mode, zombies are set to automatically search for nearby virtual characters and attack them. The virtual character needs to kill zombies to ensure its own survival.
Illustratively, the skill vending machine is placed at a random location in the virtual environment, or at a fixed location. The virtual character can trigger the skill purchasing interface by approaching the skill vending machine and thereby purchase skills. Illustratively, the location of the skill vending machine may be marked on a minimap of the virtual environment; by looking at the minimap, the user can control the virtual character to approach the skill vending machine.
A collision detection model is arranged outside the model of the skill vending machine and is used to detect the virtual character approaching the machine. When the three-dimensional model of the virtual character collides with the collision detection model, the collision detection model generates collision information; from this information the client determines that the virtual character has collided with the collision detection model, and accordingly displays the skill purchasing interface or pops up a skill purchasing button.
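A minimal sketch of such a trigger volume check: the patent does not specify the collision shape, so a sphere is assumed here, and all names and values are illustrative:

```python
import math
from dataclasses import dataclass

@dataclass
class SphereTrigger:
    """Simplified stand-in for the collision detection model placed
    around the skill vending machine's model."""
    cx: float
    cy: float
    cz: float
    radius: float

    def contains(self, x: float, y: float, z: float) -> bool:
        # Collision here is simply the character's position entering the sphere.
        return math.dist((x, y, z), (self.cx, self.cy, self.cz)) <= self.radius

def update_shop_ui(trigger: SphereTrigger, pos: tuple[float, float, float]) -> str:
    """When the character overlaps the trigger volume, the client shows the
    purchase interface (or pops up a purchase button); otherwise it hides it."""
    return "show_purchase_button" if trigger.contains(*pos) else "hide_purchase_button"
```

A production engine (the document mentions Unity) would instead attach a trigger collider to the vending machine and react to its enter/exit callbacks; the logic is the same.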
Fig. 3 is a method flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application. The execution body of the method is exemplified as a client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment, the method comprising at least the following steps.
Step 301, displaying a virtual environment screen, wherein the virtual environment screen comprises: a first virtual character and at least two second virtual characters located in the virtual environment, the first virtual character possessing stunning skills.
Illustratively, after a battle begins, the client displays a battle user interface that includes the virtual environment screen and UI controls superimposed on it. Illustratively, the user interface before the battle may further include: a team-forming interface for forming a team with friends, a matching interface for matching other virtual characters with the virtual character, a loading interface for loading information of the current match, and the like.
Illustratively, the first virtual character in this embodiment is the virtual character controlled by the client, i.e., the master virtual character of the client. Illustratively, the second virtual character in this embodiment is a virtual character controlled by a server, artificial intelligence, or an algorithm, and is set to automatically attack the virtual character controlled by the client (the first virtual character). Illustratively, the second virtual character is at least one of a zombie, an undead creature, a monster, an animal, or a boss disposed in the virtual environment.
The virtual environment screen is an exemplary screen acquired by observing the virtual environment from the perspective of the first virtual character.
The angle of view refers to an observation angle when observing in the virtual environment from the first-person or third-person perspective of the virtual character. Optionally, in an embodiment of the present application, the perspective is the angle at which the virtual character is observed by the camera model in the virtual environment.
Optionally, the camera model automatically follows the virtual character in the virtual environment, that is, when the position of the virtual character in the virtual environment changes, the camera model simultaneously changes along with the position of the virtual character in the virtual environment, and the camera model is always within a preset distance range of the virtual character in the virtual environment. Optionally, the relative positions of the camera model and the virtual character do not change during the automatic following process.
The camera model refers to a three-dimensional model located around the virtual character in the virtual environment, which is located near or at the head of the virtual character when the first-person perspective is adopted; when a third person viewing angle is adopted, the camera model can be positioned behind the virtual character and bound with the virtual character, and can also be positioned at any position with a preset distance from the virtual character, and the virtual character in the virtual environment can be observed from different angles through the camera model. Optionally, the viewing angle includes other viewing angles, such as a top view, in addition to the first-person viewing angle and the third-person viewing angle; when a top view is used, the camera model may be located above the head of the virtual character, and the top view is a view of the virtual environment from an overhead view. Alternatively, the camera model is not actually displayed in the virtual environment, i.e., the camera model is not displayed in the virtual environment screen displayed by the user interface.
Taking as an example the case where the camera model is located at any position at a preset distance from the virtual character: optionally, one virtual character corresponds to one camera model, and the camera model may rotate with the virtual character as a rotation center. For example, the camera model is rotated with any point of the virtual character as the rotation center; during the rotation, the camera model not only rotates in angle but is also offset in displacement, and the distance between the camera model and the rotation center remains unchanged, i.e., the camera model rotates on the surface of a sphere whose center is the rotation center. The any point of the virtual character may be any point on the head, the trunk, or around the virtual character, which is not limited in the embodiments of the present application. Optionally, when the camera model observes the virtual character, the center of the camera model's viewing angle points from the point on the spherical surface where the camera model is located toward the center of the sphere.
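As an illustration only (not part of the claimed method), the spherical camera placement described above can be sketched in a few lines of Python; the function names and the yaw/pitch parameterization are assumptions:

```python
import math

def camera_position(center, radius, yaw, pitch):
    # Hypothetical helper: place the camera on a sphere of the given
    # radius around the rotation center; yaw and pitch are in radians.
    cx, cy, cz = center
    x = cx + radius * math.cos(pitch) * math.sin(yaw)
    y = cy + radius * math.sin(pitch)
    z = cz + radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

def view_direction(center, cam_pos):
    # The viewing-angle center points from the camera toward the sphere center.
    return tuple(c - p for c, p in zip(center, cam_pos))
```

Because the distance to the rotation center is fixed, any (yaw, pitch) pair yields a point at exactly `radius` from the center, matching the sphere-surface rotation in the text.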
Optionally, the camera model may also observe the virtual character at a preset angle in different directions of the virtual character.
Schematically, referring to fig. 4, a point in the virtual character 11 is defined as a rotation center 12, and the camera model is rotated around the rotation center 12. Optionally, the camera model is configured with an initial position, which is a position behind and above the virtual character (such as behind the head). Schematically, as shown in fig. 4, the initial position is position 13, and when the camera model is rotated to position 14 or position 15, the viewing angle direction of the camera model changes with the rotation of the camera model.
Optionally, the virtual environment displayed by the virtual environment screen includes: at least one element selected from the group consisting of ladders, straight ladders, rock climbing areas, mountains, plains, rivers, lakes, oceans, deserts, marshes, quicksand, sky, plants, buildings and vehicles.
In an alternative scenario, the first virtual character is surrounded by a plurality of second virtual characters that attack the first virtual character simultaneously.
Illustratively, as shown in fig. 5, in a virtual environment screen 601 obtained at a first person perspective of a first virtual character, a hand 602 of the first virtual character, and two second virtual characters 603 are included.
The stunning skill is a skill possessed by the first virtual character. Illustratively, the stunning skill is a passive skill, i.e., a skill that is automatically used when its use conditions are met. Illustratively, the stunning skill is a skill that the first virtual character sets before entering the game and brings into the game, or a skill that the first virtual character acquires during the game. Illustratively, the stunning skill is a skill that the first virtual character obtains by purchasing it with gold coins after entering the game.
Illustratively, the stunning skill is a skill that acts on the second virtual character and produces a stunning effect on it. The stunning effect includes: stunning the second virtual character, and reducing a state value (a life value, a moving speed, an attack power, a defense power, a recovery speed, etc.) of the second virtual character. Stunning refers to restricting the movement and attack of the second virtual character. Illustratively, stunning refers to putting the second virtual character into a state in which it cannot move from its position and cannot attack.
In response to the number of defeats of the first virtual character meeting the number threshold, the first virtual character is controlled to use the stunning skill, the number of defeats being the number of defeats of the second virtual character by the first virtual character, step 302.
Illustratively, the second virtual character has a life value; the first virtual character may attack the second virtual character, causing its life value to decrease, and the second virtual character dies when its life value falls below a threshold (e.g., the threshold is 0). Illustratively, defeating means that it is the first virtual character's attack that causes the second virtual character's life value to fall below the threshold. For example, the second virtual character's life value is 100; a third virtual character attacks it, dropping its life value from 100 to 1; the first virtual character then attacks it, dropping its life value from 1 to 0. The first virtual character's attack is therefore considered the one that caused the life value to fall below the threshold, and the second virtual character is defeated by the first virtual character.
Illustratively, each time a first virtual character defeats a second virtual character, the defeat of the first virtual character is increased by one.
The number threshold is an arbitrarily set value, and for example, the number threshold may be 10, 30, 50, or the like. When the first virtual character defeats a certain number of second virtual characters, the first virtual character automatically uses stunning skills.
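The bookkeeping behind steps 301 and 302 can be sketched as follows. This is an illustrative sketch only; the class name, the reset-on-trigger behavior, and the default threshold of 30 are assumptions based on the examples in the text:

```python
class DefeatCounter:
    # Hypothetical sketch of the passive-trigger bookkeeping: each defeat
    # increments the count, and the stun fires once the threshold is met.
    def __init__(self, threshold=30):
        self.threshold = threshold
        self.defeats = 0

    def record_defeat(self):
        self.defeats += 1
        if self.defeats >= self.threshold:
            self.defeats = 0  # reset so the skill can trigger again later
            return True       # skill-use condition met: auto-use the stun
        return False
```

The client would call `record_defeat()` each time the first virtual character defeats a second virtual character, and control the first virtual character to use the stunning skill whenever it returns `True`.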
Step 303, controlling the second virtual character located within the action range of the stunning skill to be subjected to the stunning action effect, wherein the action range comprises a region range determined in the virtual environment according to the position of the first virtual character.
After the first virtual character uses the stunning skill, the client acquires the action range of the stunning skill and controls the second virtual character positioned in the action range to receive the stunning action effect.
The action range may be, for example, a three-dimensional spatial range or a two-dimensional planar range. For example, the action range is a spherical or circular range centered on the position of the first virtual character with a certain distance as its radius. The action range may also be an annular range centered on the position of the first virtual character, with one distance as its inner radius and another distance as its outer radius. The action range may also be a sector range with the position of the first virtual character as its vertex, opening in the direction the first virtual character faces.
In an alternative embodiment, after the first virtual character uses the stunning skill, the client obtains a position of each second virtual character in the virtual environment, calculates a distance between each second virtual character and the first virtual character, and determines the second virtual character with a distance smaller than a threshold value as a second virtual character within an action range.
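The distance check in the paragraph above can be sketched as follows (illustrative only; the function name and the use of 2D positions are assumptions, and the same logic applies to 3D coordinates):

```python
import math

def characters_in_range(first_pos, second_positions, radius):
    # Every second virtual character strictly closer than `radius` to the
    # first virtual character is within the stunning skill's action range.
    return [i for i, pos in enumerate(second_positions)
            if math.dist(first_pos, pos) < radius]
```

The client would apply the stunning effect to each index this function returns.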
Illustratively, the stunning effect corresponds to: injury value, stunning duration. The client reduces the life value of the second virtual character by the injury value, controls the second virtual character to be stunned, and continues the stunned state until the stunned duration is finished. Illustratively, the stunning effect may further include: and reducing the defending value and the moving speed of the second virtual character, and clearing at least one of the anger value of the second virtual character.
In summary, in the method provided by this embodiment, a stunning skill is set for the first virtual character. After the first virtual character defeats a certain number of zombies (second virtual characters), the first virtual character is automatically controlled to use the stunning skill to stun the zombies near it, so that the first virtual character can limit the activities of the surrounding zombies and quickly escape from their encirclement while they are stunned and unable to move. The stunning skill is a skill that is automatically used when its use conditions are met (a passive skill); the skill can be triggered without additional operation by the user to stun nearby zombies, which simplifies user operation and improves the human-computer interaction efficiency of the user controlling the first virtual character to escape.
Illustratively, in an alternative embodiment, the stunning skills are skills that the first virtual character purchased after entering the game. Illustratively, the stunning skills are randomly triggered skills.
Fig. 6 is a method flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application. The execution body of the method is exemplified by a client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment. On the basis of the exemplary embodiment shown in fig. 3, step 401 and step 402 are further included before step 302, step 302 further includes step 3021 to step 3023, and step 501 is further included after step 303.
In response to the distance of the first virtual character from the target object being less than the distance threshold, a skill purchase interface is displayed for purchasing stunning skills, the target object being a virtual object controlled by the server, step 401.
Illustratively, when the first virtual character defeats a second virtual character, a corresponding number of virtual items becomes available. Illustratively, virtual items are used within the current match to purchase skills, weapons, equipment, and vehicles, to open maps, to unlock new areas, and so on. By way of example, the virtual item may be a gold coin, a coupon, a game coin, a shell, a diamond, or the like. Illustratively, the first virtual character obtains 10 gold coins for defeating a second virtual character. For example, as shown in fig. 5, an information field 604 of the first virtual character is displayed in the upper left corner of the virtual environment screen 601; the number of gold coins of the first virtual character is recorded in the information field, and the number increases by 10 each time the first virtual character kills a second virtual character.
Illustratively, the first virtual character may also obtain virtual items by defeating virtual characters controlled by other clients, by selling skills, weapons, equipment, or vehicles, or by completing designated tasks.
For example, the first virtual character may purchase stunning skills using a virtual item. Illustratively, a target object is provided in the virtual environment. The target object is for purchasing skills. Illustratively, the target object may be at least one of a vending machine, a store, and an NPC (Non-Player Character) set in the virtual environment. The target object is illustratively an object that is stationary in the virtual environment, or an object that is movable only to a small extent. Illustratively, the target object has a three-dimensional virtual model in the virtual environment, e.g., as shown in FIG. 7, with the target object 605 in the virtual environment. When the first avatar approaches the target object, the client displays a skill purchase interface. The skill purchase interface, for example, includes a purchase control that, when triggered by a user, controls the first virtual character to purchase a corresponding item. For example, as shown in fig. 8, the first avatar approaches the target object 605, at this time, a purchase interface is displayed, a purchase control 606 is included in the purchase interface, the user triggers the purchase control 606 to purchase the skill "shock cherry", and after the user purchases the "shock cherry", as shown in fig. 9, an icon corresponding to the "shock cherry" is displayed in the skill field 607 to inform the user that the first avatar currently has the skill "shock cherry".
Illustratively, the client will mark the location of the target object on a minimap of the virtual environment so that the user can find the target object from the minimap and purchase skills. For example, as shown in fig. 9, a minimap of the virtual environment is displayed in the upper right corner of the user interface, the position of the target object is marked with a black triangle 612 in the minimap, and the user can find the target object according to the position of the black triangle 612.
For example, the client may detect in real-time whether the distance of the first virtual character from the target object is less than a distance threshold to determine whether the first virtual character is near the target object.
The client may also set a collision detection model on the target object, with the collision detection model detecting whether the first virtual character is close to the target object.
Illustratively, the target object is provided with a collision detection model for detecting a distance of the first virtual character from the target object. A skill purchase interface is displayed in response to the three-dimensional virtual model of the first virtual character colliding with the collision detection model.
The collision detection model is a three-dimensional box disposed on a three-dimensional virtual model of the target object. Illustratively, the collision detection model is a box (collision box) that is not visible, i.e., the collision detection model is not visible to the user in the virtual environment screen. Illustratively, the size and shape of the collision detection model is set according to the size and shape of the three-dimensional virtual model of the target object. For example, the size and shape of the collision detection model is the same as the size and shape of the three-dimensional virtual model of the target object. Or, the size of the collision detection model is slightly smaller than the size of the three-dimensional virtual model of the target object. Or the size of the collision detection model is slightly larger than the size of the three-dimensional virtual model of the target object, so that the collision detection model wraps the target object.
For example, in order to simplify computation, the collision detection model is generally set to a regular shape, such as a cube, a cuboid, a sphere, a cone, or a cylinder. Illustratively, the collision detection model is set slightly larger than the three-dimensional virtual model of the target object so that the collision detection model wraps around the target object.
For example, the collision detection model may detect a collision in the virtual environment, and when other virtual models collide with the surface of the collision detection model, the collision detection model may generate collision information, where the collision information includes: at least one of information of the virtual model, collision point, and collision time. The information of the virtual model includes: at least one of a type of the virtual model, a size of the virtual model, a material of the virtual model, an account number of the virtual character, and a state of the virtual character. Illustratively, the virtual character has a three-dimensional virtual model in the virtual environment, and when the virtual model of the virtual character collides with the collision detection model, the collision detection model acquires collision information, and the virtual character is determined to be close to the target object according to the collision information.
For example, as shown in fig. 10, a collision detection model 608 slightly larger than the target object 605 is provided on the target object 605.
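The invisible collision box described above is commonly realized as an axis-aligned bounding box (AABB) overlap test. A minimal sketch, under the assumption that each box is represented by its min/max corner tuples (the function names are illustrative, not from the patent):

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    # Two axis-aligned boxes overlap iff their intervals overlap on every axis.
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i]
               for i in range(len(min_a)))

def character_near_target(char_min, char_max, detector_min, detector_max):
    # A collision between the character's model and the detection box is
    # treated as "the character is close to the target object".
    return aabb_overlap(char_min, char_max, detector_min, detector_max)
```

When this test becomes true, the client would generate the collision information and display the skill purchase interface.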
Step 402, in response to the purchase operation, reducing the number of virtual items owned by the first virtual character, wherein the virtual items are obtained by defeating the second virtual character, to control the first virtual character to obtain stunning skills.
For example, the purchase operation may be an operation by which the user triggers a UI control on the purchase interface. Alternatively, the purchase operation may be an operation in which the user inputs an instruction using an input device such as a mouse, a keyboard, a microphone, or a camera.
The client determines the skill of the user to purchase according to the purchase operation of the user, acquires the number of virtual articles required by the skill, correspondingly reduces the number of virtual articles of the first virtual character, and controls the first virtual character to acquire the skill. For example, if the number of virtual items of the first virtual character is insufficient to pay for the skill, the user is prompted that the number of virtual items is insufficient.
For example, as shown in fig. 8, the virtual object required for purchasing the skill "shock cherry" is 2000 gold coins, and the first virtual character currently has 9410 gold coins, the client reduces the number of gold coins of the first virtual character by 2000, so that the first virtual character obtains the skill "shock cherry".
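The purchase flow of step 402, including the insufficient-funds prompt, can be sketched as follows (illustrative only; the function signature and the error string are assumptions):

```python
def purchase_skill(coins, price, skills, skill_name):
    # Deduct the required number of virtual items and grant the skill,
    # or report that the first virtual character cannot afford it.
    if coins < price:
        return coins, skills, "insufficient virtual items"
    return coins - price, skills + [skill_name], None
```

With the figures from the example, purchasing the 2000-coin skill from a balance of 9410 coins leaves 7410 coins and adds the skill to the first virtual character's skill list.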
In response to the number of defeats of the first virtual character meeting the number threshold, a determination is made as to whether to trigger stunning skills based on the first probability, step 3021.
Illustratively, the stunning skill is a skill that is randomly triggered after its use conditions are met; that is, the defeat count of the first virtual character meeting the number threshold does not necessarily cause the first virtual character to use the stunning skill. Rather, there is a certain probability that the first virtual character is triggered to use the stunning skill.
For example, the first probability may be 80%, i.e., when the number of defeats of the first avatar meets the number threshold, there is an 80% likelihood that the first avatar will be triggered to use the stunning skills.
Illustratively, when teammates are present around the first virtual character, the effect of the stunning skills may also be changed according to the number of teammates. As shown in fig. 11, step 3021 further includes steps 3021-1 to 3021-3, and step 303 further includes steps 3031 and 3032.
In response to the number of defeats of the first avatar satisfying the number threshold, a number of target avatars that are within a first range, the first range being an area range determined in the virtual environment based on the location of the first avatar, is obtained 3021-1.
For example, when the defeat count of the first virtual character satisfies the number threshold, the client may obtain the number of target virtual characters located within the first range. The target virtual character may be a virtual character in the same camp as the first virtual character. The target virtual character may also be a second virtual character. That is, the client may determine the probability of triggering the stunning skill according to the number of teammates around the first virtual character, or according to the number of zombies around the first virtual character. Illustratively, the target virtual character may also be a teammate who is being attacked by a second virtual character, or a virtual character controlled by another client.
The first range may be a spherical range or a circular range with a certain distance as a radius, centered on the position of the first virtual character. Illustratively, the first range may be the same as or different from the range of action of the stunning skills.
In response to the number of target avatars not meeting the threshold, a determination is made as to whether to trigger stunning skills based on the first probability, step 3021-2.
Illustratively, if the number of teammates beside the first virtual character is small, whether to trigger the stunning skill is determined according to the first probability. The threshold may be any value; for example, the threshold may be 0, i.e., when there are no teammates beside the first virtual character, the first probability is used to determine whether to trigger the stunning skill.
Step 3021-3, in response to the number of target avatars meeting a threshold, determining whether to trigger stunning skills based on a second probability, the second probability not being equal to the first probability.
Illustratively, if the number of teammates beside the first virtual character is large, whether to trigger the stunning skill is determined according to the second probability. For example, the threshold may be 0, i.e., when there are teammates around the first virtual character, the second probability is used to determine whether to trigger the stunning skill. For example, the first probability may be greater than the second probability, or the first probability may be less than the second probability.
For example, when the first virtual character has teammates, the first virtual character may trigger stunning skills with a greater probability.
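Steps 3021-1 to 3021-3 can be sketched as a single probabilistic roll whose probability depends on the teammate count. This is an illustrative sketch; the concrete probabilities (80% and 90%) and the injectable `rng` parameter are assumptions made for clarity and testability:

```python
import random

def should_trigger_stun(target_count, threshold=0,
                        first_probability=0.8, second_probability=0.9,
                        rng=random):
    # Choose the trigger probability based on how many target virtual
    # characters are within the first range, then make one random roll.
    p = second_probability if target_count > threshold else first_probability
    return rng.random() < p
```

If the roll fails, the client would clear the defeat count per step 3023; if it succeeds, the first virtual character uses the stunning skill per step 3022.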
In response to triggering the stunning skills, the first virtual character is controlled to use the stunning skills, step 3022.
And the client determines whether to trigger the stunning skill according to the first probability, and if so, controls the first virtual character to use the stunning skill.
Step 3023, in response to not triggering stunning skills, setting the defeat number of the first virtual character to zero.
The client determines whether the stunning skill is triggered according to the first probability. If the stunning skill is not triggered, the first virtual character is not controlled to use it; instead, the defeat count of the first virtual character is cleared and recounted, and when the defeat count meets the number threshold again, whether to trigger the stunning skill is judged again.
Step 3031, controlling a second virtual character located within the scope of action of the stunning skill to be subjected to the first stunning effect in response to the number of target virtual characters not meeting the threshold.
Step 3032, controlling a second virtual character located within the scope of action of the stunning skill to be subjected to a second stunning action effect in response to the target number of virtual characters meeting the threshold.
Wherein the first and second stunning effects correspond to different maximum duration of action or injury values.
For example, as shown in fig. 12, a circular area with radius R centered on the position of the first virtual character 609 serves as the action range: the circle 610 is the action range, the three second virtual characters 603 located inside the circle 610 are subjected to the first stunning effect of the stunning skill, and the two second virtual characters 603 located outside the circle 610 are not subjected to the first stunning effect of the stunning skill.
Illustratively, the effect of the stunning skill has an action duration. For example, the stunning effect corresponds to a maximum action duration. After the second virtual character is subjected to the stunning effect, the client counts the duration for which the second virtual character has been subjected to the stunning effect, and stops the stunning effect on the second virtual character in response to the duration reaching the maximum action duration. That is, the stunning skill stuns the second virtual character only for a period of time, after which the second virtual character returns to normal.
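The expiry check described above reduces to comparing elapsed time against the maximum action duration; a minimal sketch (the function name and the timestamp representation are assumptions):

```python
def stun_expired(stun_start, now, max_duration):
    # The client timestamps the moment the stun lands and stops the
    # effect once the elapsed time reaches the maximum action duration.
    return now - stun_start >= max_duration
```

The client would run this check each frame (or tick) for every stunned second virtual character and restore normal behavior when it returns true.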
Illustratively, when no teammates are present at the first avatar, the stunning skills used by the first avatar will cause a first stunning effect on the second avatar at the first avatar. When teammates exist on the first virtual character, the stunning skill is triggered, so that the stunning effect of the stunning skill can be increased, and the stunning effect of the stunning skill is increased from the first stunning effect to the second stunning effect. Illustratively, an increased stunning effect refers to an increased injury value or an increased maximum duration of action of the stunning.
For example, if no teammate is present beside the first virtual character, the stunning skill triggered at this time causes 100 points of injury to the second virtual character and stuns it for 10 s. If teammates are present beside the first virtual character, the triggered stunning skill causes 300 points of injury to the second virtual character and stuns it for 15 s.
Illustratively, in response to a second virtual character within the action range of the stunning skill being hit by at least two stunning skills, the second virtual character is controlled to be subjected to a third stunning effect. When a plurality of virtual characters stun a second virtual character using stunning skills, the stunning effect suffered by the second virtual character increases. For example, where both the first virtual character and a third virtual character stun the second virtual character using stunning skills, the second virtual character is subjected to twice or three times the first stunning effect. Doubling the stunning effect means at least one of: doubling the injury value, and doubling the maximum action duration of the stun. Illustratively, the third stunning effect is a doubled stunning effect.
For example, if one virtual character using the stunning skill causes 100 points of injury to a second virtual character and stuns it for 10 s, and two virtual characters use stunning skills on the same second virtual character, then the second virtual character is subjected to 400 points of injury and stunned for 40 s.
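One possible reading of the numbers in the example above (two 100-damage/10 s stuns yielding 400 damage/40 s) is that with n simultaneous stuns, each stun lands with a doubled effect, giving 2·n times the base. This stacking rule is a hypothesis inferred from the single worked example, not stated by the patent:

```python
def stacked_stun_effect(base_damage, base_duration, stun_count):
    # Hypothetical stacking rule matching the worked example: a single
    # stun applies the base effect; n >= 2 simultaneous stuns each land
    # doubled, for a total of 2*n times the base damage and duration.
    if stun_count <= 1:
        return base_damage, base_duration
    factor = 2 * stun_count
    return base_damage * factor, base_duration * factor
```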
Step 501, displaying a stunning special effect corresponding to the second virtual character, wherein the stunning special effect comprises at least one of displaying a stunning sign or a text prompt on the head of the second virtual character, distinguishing the second virtual character, and controlling the second virtual character to fall down.
Illustratively, a stunning special effect is displayed on the stunned second virtual character. The stunning special effect is used to inform the user that the second virtual character has been stunned, so that the user can ignore the stunned second virtual character while escaping, or defeat the second virtual character while it cannot move.
Illustratively, the stunning effect may be a stunning sign, text prompt, or a special effect light column displayed behind the second avatar, or a distinguishing display of the second avatar (e.g., highlighting, changing color of a three-dimensional virtual model, enhancing a tracing, etc.), or a display of the second avatar lying on the ground, etc.
For example, as shown in FIG. 13, a stunning special effect 611 is displayed on the head of each stunned second virtual character 603, prompting the user that these two virtual characters have been stunned.
In summary, according to the method provided by this embodiment, when the first virtual character kills a certain number of zombies, the first virtual character is automatically controlled to use the stunning skill to stun the surrounding zombies. When the first virtual character is surrounded by zombies, the user only needs to kill a certain number of zombies to trigger the stunning skill, making it convenient to quickly escape while the zombies are stunned. This simplifies user operation and improves the human-computer interaction efficiency of the user controlling the first virtual character to escape.
According to the method provided by this embodiment, after each time the first virtual character kills a zombie, it is judged whether the number of zombies killed by the first virtual character reaches the threshold and whether the first virtual character triggers the stunning skill, so that the first virtual character is automatically controlled to use the stunning skill. This simplifies user operation and improves the human-computer interaction efficiency of the user controlling the first virtual character to escape.
According to the method provided by this embodiment, the stunning skill is randomly used after its use conditions are met, and meeting the use conditions does not necessarily trigger the first virtual character to use the stunning skill. This reduces the frequency with which the first virtual character uses the stunning skill and makes its use unpredictable, which increases the variability and suspense of the match, improves the intensity of the match, shortens the match duration, and lightens the server load.
According to the method provided by this embodiment, the first virtual character obtains gold coins by killing zombies and uses the gold coins to purchase the stunning skill, so that acquisition of the stunning skill is limited. By setting up a skill vending machine at which the first virtual character purchases the stunning skill with gold coins, and displaying a skill purchase interface when the first virtual character approaches the skill vending machine, the user can purchase the desired skill on the skill purchase interface, which simplifies the user's purchase operation and improves human-computer interaction efficiency.
According to the method provided by the embodiment, whether the first virtual character is close to the skill vending machine is detected by arranging the collision detection model on the skill vending machine, so that a skill purchasing interface is automatically displayed, the skill purchasing operation of a user is simplified, and the man-machine interaction efficiency is improved.
According to the method provided by the embodiment, the stunned zombie special effect is displayed on the stunned zombie, so that a user can conveniently and quickly identify the stunned zombie.
According to the method provided by the embodiment, after the zombie is stunned for a period of time, the zombie is automatically controlled to recover, and normal running of the game is not affected while the escape time is provided for the first virtual character.
According to the method provided by the embodiment, when teammates exist at the side of the first virtual character, the probability of triggering the stunning skills is increased, the action effect of the stunning skills is increased, a plurality of virtual characters can conveniently escape from the enclosure of the zombie, the operation that the user controls the virtual characters to escape from the enclosure is simplified, and the man-machine interaction efficiency is improved.
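The teammate-dependent probability selection can be sketched as a small pure function; the ally threshold and the two probability values below are assumed for illustration only (the disclosure requires only that the second probability differ from the first).

```python
def choose_trigger_probability(num_nearby_allies,
                               first_probability=0.3,
                               second_probability=0.6):
    """Pick the trigger probability for the stunning skill.

    When the number of target virtual characters near the first virtual
    character meets the threshold (teammates present), the higher second
    probability is used; otherwise the first probability applies.
    All numeric values here are assumptions for the sketch.
    """
    ALLY_THRESHOLD = 1  # assumed threshold for "teammates present"
    if num_nearby_allies >= ALLY_THRESHOLD:
        return second_probability
    return first_probability
```

The returned value would then feed the probabilistic trigger check each time the defeat count reaches the number threshold.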
An exemplary embodiment in which the virtual character control method provided in the present application is applied to a first-person shooter game is described below.
Fig. 14 is a flowchart of a virtual character control method according to an exemplary embodiment of the present application. The method is described as being executed by a client running on the terminal shown in fig. 1, the client supporting a virtual environment. The method comprises the following steps.
Step 701, enter zombie mode.
For example, the first-person shooter game provides a plurality of game modes for the user. In zombie mode, zombies automatically attack the virtual characters, and the user needs to control a virtual character to attack and kill the zombies so that the virtual character survives and wins.
Step 702, determine whether the virtual character has killed a zombie. If so, go to step 703; otherwise, return to step 701.
Step 703, the virtual character obtains gold coins.
When the virtual character kills a zombie, the virtual character correspondingly obtains gold coins.
Step 704, determine whether the virtual character is close to the skill vending machine. If so, proceed to step 705; otherwise, return to step 703.
Step 705, display the number of gold coins required to purchase the skill and a purchase button.
Step 706, determine whether the virtual character has killed a zombie. If so, execute step 707; otherwise, return to step 705.
Step 707, the zombie dies.
Illustratively, when the virtual character kills a zombie, the client correspondingly records the virtual character's defeat count.
Step 708, determine whether to trigger the stunning skill. If so, perform step 709; otherwise, return to step 707.
The client determines whether the virtual character's defeat count meets the number threshold and, if so, decides according to a probability whether to trigger the stunning skill.
Step 709, control the zombies to be stunned so that they cannot attack the player.
The virtual character uses the stunning skill to stun the zombies within its range of action.
Step 710, determine whether the stunning time is over. If so, proceed to step 711; otherwise, return to step 709.
Step 711, control the zombies to return to normal.
When the stunning time is over, the client controls the zombies to return to normal.
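Steps 703 and 709-711 above can be sketched as follows. `STUN_DURATION` and `GOLD_PER_KILL` are assumed values; storing a recovery timestamp on each zombie means the zombie returns to normal automatically once its stun time elapses, matching steps 710-711.

```python
STUN_DURATION = 5.0   # assumed stun time in seconds
GOLD_PER_KILL = 10    # assumed coin reward per kill

class Zombie:
    def __init__(self):
        self.stunned_until = 0.0  # game-clock time at which the stun ends

    def is_stunned(self, now):
        # Steps 710-711: the zombie recovers automatically once the stun
        # time is over -- no explicit "recover" message is required.
        return now < self.stunned_until

def apply_stun(zombies_in_range, now):
    """Step 709: stun every zombie within the skill's range of action."""
    for z in zombies_in_range:
        z.stunned_until = now + STUN_DURATION

def award_gold(player_gold, kills=1):
    """Step 703: each kill adds the assumed coin reward."""
    return player_gold + kills * GOLD_PER_KILL
```

A per-frame update loop would simply query `is_stunned(now)` to decide whether each zombie may attack, so no separate timer object per zombie is needed.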
This embodiment also provides an interaction method between the client and the server for when the virtual character obtains and spends gold coins. Fig. 15 is a flowchart of a virtual character control method according to an exemplary embodiment of the present application. The method comprises the following steps:
Step 801, the client controls the virtual character to attack a zombie.
Step 802, the client reports the damage value dealt by the virtual character to the zombie to the server.
Step 803, the server verifies the damage value and determines that the damage is legal.
Step 804, when the damage is legal, the server returns a kill-success message to the client.
Step 805, after receiving the kill-success message sent by the server, the client controls the zombie to die.
Step 806, the server increases the number of gold coins obtained by the virtual character for killing the zombie.
Step 807, the server sends a protocol message to the client notifying it of the updated gold coin count.
Step 808, the client increases the virtual character's gold coin count.
Step 809, the client receives an instruction from the user to purchase a weapon or unlock an area using gold coins.
Step 810, the client sends a gold-coin-spending protocol message to the server.
Step 811, the server verifies the validity of the current transaction.
Step 812, after determining that the transaction is legitimate, the server returns a protocol message to the client approving the use of the gold coins.
Step 813, the client determines that the transaction was successful.
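The server-side checks of Fig. 15 (steps 803-812) might look like the following sketch. The damage cap, coin reward, and return-value shapes are assumptions for illustration, not the actual protocol of the disclosure; the point is that the server, not the client, is authoritative for kill validation and the gold coin balance.

```python
class TransactionError(Exception):
    """Raised when a reported damage value or coin transaction fails validation."""

class Server:
    MAX_DAMAGE = 200   # assumed cap used to judge whether reported damage is legal

    def __init__(self):
        self.gold = {}  # player_id -> server-verified coin balance

    def report_damage(self, player_id, damage, zombie_hp):
        # Steps 803-807: verify the damage is legal before crediting a kill
        # and updating the authoritative gold balance.
        if not 0 < damage <= self.MAX_DAMAGE:
            raise TransactionError("illegal damage value")
        if damage >= zombie_hp:  # kill confirmed (step 804)
            self.gold[player_id] = self.gold.get(player_id, 0) + 10
            return {"kill": True, "gold": self.gold[player_id]}
        return {"kill": False}

    def spend_gold(self, player_id, cost):
        # Steps 810-812: validate the transaction before approving it.
        balance = self.gold.get(player_id, 0)
        if cost <= 0 or cost > balance:
            raise TransactionError("invalid transaction")
        self.gold[player_id] = balance - cost
        return {"ok": True, "gold": self.gold[player_id]}
```

The client would mirror the returned balance (steps 805, 808, 813) rather than computing it locally, which is what keeps the coin count accurate even if a client misbehaves.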
In summary, according to the method provided by this embodiment, the virtual character obtains gold coins by killing zombies and can use them to purchase skills at the skill vending machine, obtaining the stunning skill once the purchase succeeds. When the virtual character has killed a certain number of zombies, it is controlled to use the stunning skill automatically, so that when surrounded by a large number of zombies it can stun them, allowing the user to control the virtual character to escape from the encirclement.
According to the method provided by this embodiment, when the client kills a zombie, the server verifies the damage dealt by the virtual character and determines whether the zombie is killed. After the kill, the server sends the client an instruction to increase the gold coins, and when the client needs to spend gold coins the server checks the validity of the transaction, ensuring that the virtual character's gold coin count is accurate.
The following are device embodiments of the present application, reference being made to the above-described method embodiments for details of the device embodiments that are not described in detail.
Fig. 16 is a block diagram of a virtual character control apparatus provided in an exemplary embodiment of the present application. The apparatus comprises:
the display module 901 is configured to display a virtual environment screen, where the virtual environment screen includes: a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing stunning skills;
a control module 902 for controlling the first virtual character to use the stunning skill in response to a number of defeats of the first virtual character satisfying a number threshold, the number of defeats being a number of defeats of the second virtual character by the first virtual character;
the control module 902 is further configured to control the second virtual character located within an action range of the stunning skill to be subjected to a stunning action effect, where the action range includes a region range determined in the virtual environment according to a position of the first virtual character.
In an alternative embodiment, the apparatus further comprises:
a determining module 903, configured to determine, according to a first probability, whether to trigger the stunning skill in response to the number of defeats of the first virtual character meeting a number threshold;
the control module 902 is further configured to control the first virtual character to use the stunning skill in response to triggering the stunning skill.
In an alternative embodiment, the apparatus further comprises:
a recording module 904 for zeroing the defeat number of the first virtual character in response to not triggering the stunning skill.
In an alternative embodiment, the apparatus further comprises:
an obtaining module 905, configured to obtain, in response to the number of defeats of the first virtual character meeting a number threshold, a target number of virtual characters located in a first range, where the first range is a region range determined in the virtual environment according to a position of the first virtual character;
the determining module 903 is further configured to determine, according to a first probability, whether to trigger the stunning skill in response to the number of target virtual characters not meeting a threshold.
In an alternative embodiment, the determining module 903 is further configured to determine, in response to the number of target avatars meeting a threshold, whether to trigger the stunning skill according to a second probability, where the second probability is not equal to the first probability.
In an alternative embodiment, the control module 902 is further configured to control, in response to the number of target virtual characters not meeting the threshold, the second virtual character located within the action range of the stunning skill to be subjected to a first stunning action effect;
the control module 902 is further configured to control, in response to the number of target virtual characters meeting the threshold, the second virtual character located within the action range of the stunning skill to be subjected to a second stunning action effect;
wherein the first and second stunning effects correspond to different maximum duration of action or injury values.
In an alternative embodiment, the apparatus further comprises:
a purchase module 906 for reducing the number of virtual items owned by the first virtual character in response to a purchase operation, controlling the first virtual character to obtain the stunning skills;
wherein the virtual article is obtained by defeating the second virtual character by the first virtual character.
In an alternative embodiment, the display module 901 is further configured to display a skill purchase interface for purchasing the stunning skill in response to the distance between the first virtual character and a target object being less than a distance threshold, the target object being a virtual object controlled by a server.
In an alternative embodiment, a collision detection model is arranged on the target object, and the collision detection model is used for detecting the distance between the first virtual character and the target object;
The display module 901 is further configured to display the skill purchasing interface in response to a collision between the three-dimensional virtual model of the first virtual character and the collision detection model.
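One way to realize the collision detection model is a spherical trigger volume around the vending machine that is tested against the character's bounding sphere; the radii below are assumed for illustration only.

```python
import math

class SkillVendingMachine:
    """Hypothetical sketch: a spherical collision-detection volume stands in
    for the collision detection model arranged on the target object."""

    def __init__(self, x, y, z, trigger_radius=3.0):
        self.pos = (x, y, z)
        self.trigger_radius = trigger_radius

    def overlaps(self, character_pos, character_radius=0.5):
        # The two models "collide" when the character's bounding sphere
        # intersects the machine's detection volume.
        dx, dy, dz = (a - b for a, b in zip(character_pos, self.pos))
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        return dist <= self.trigger_radius + character_radius

def update_purchase_ui(machine, character_pos):
    """Show the skill purchase interface only while the models collide."""
    if machine.overlaps(character_pos):
        return "show_skill_purchase_interface"
    return "hide"
```

In a game engine this check would typically be an engine-provided trigger callback rather than manual distance math, but the effect is the same: the purchase UI appears automatically as the character walks up to the machine.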
In an optional embodiment, the display module 901 is further configured to display a stunning special effect corresponding to the second virtual character, where the stunning special effect includes at least one of displaying a stunning sign or a text prompt on the head of the second virtual character, displaying the second virtual character differently, and controlling the second virtual character to fall over.
In an alternative embodiment, the first stunning effect corresponds to a maximum duration of effect, and the apparatus further comprises:
a timing module 907, configured to time an action duration of the second virtual character subjected to the stunning action effect;
the control module 902 is further configured to stop the stunning effect of the second virtual character in response to the duration of action reaching the maximum duration of action.
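The timing module can be sketched as follows; the per-effect maximum durations and injury values are assumed numbers, illustrating that the first and second stunning action effects differ in maximum duration of action (or injury value) as described in the embodiment above.

```python
# Assumed per-effect parameters; only the fact that the two effects differ
# is taken from the embodiment, not these specific numbers.
EFFECT_PARAMS = {
    "first":  {"max_duration": 4.0, "damage": 0},
    "second": {"max_duration": 6.0, "damage": 5},
}

class StunTimer:
    """Sketch of the timing module: tracks how long a character has been
    under the stunning action effect and stops it at the maximum duration."""

    def __init__(self, effect, start):
        self.max_duration = EFFECT_PARAMS[effect]["max_duration"]
        self.start = start

    def active(self, now):
        # The effect stops as soon as the elapsed action duration
        # reaches the maximum duration of action.
        return (now - self.start) < self.max_duration
```

The control module would poll `active(now)` each frame and restore the second virtual character's normal behavior once it returns `False`.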
In an alternative embodiment, the control module 902 is further configured to control the second virtual character to be subjected to a third stunning effect in response to the second virtual character being within the scope of action of the stunning skills being hit by at least two of the stunning skills.
It should be noted that: the virtual character control device provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the control device for the virtual character provided in the above embodiment and the control method embodiment for the virtual character belong to the same concept, and the specific implementation process is detailed in the method embodiment, which is not repeated here.
Fig. 17 shows a block diagram of a terminal 2000 according to an exemplary embodiment of the present application. The terminal 2000 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 2000 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 2000 includes: a processor 2001 and a memory 2002.
Processor 2001 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 2001 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). Processor 2001 may also include a main processor and a coprocessor; the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 2001 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 2001 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 2002 may include one or more computer-readable storage media, which may be non-transitory. Memory 2002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2002 is used to store at least one instruction for execution by processor 2001 to implement the virtual character control methods provided by the method embodiments herein.
In some embodiments, the terminal 2000 may further optionally include: a peripheral interface 2003 and at least one peripheral. The processor 2001, memory 2002, and peripheral interface 2003 may be connected by a bus or signal line. The respective peripheral devices may be connected to the peripheral device interface 2003 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 2004, a touch display 2005, a camera 2006, audio circuitry 2007, and a power supply 2008.
Peripheral interface 2003 may be used to connect I/O (Input/Output) related at least one peripheral device to processor 2001 and memory 2002. In some embodiments, processor 2001, memory 2002, and peripheral interface 2003 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 2001, memory 2002, and peripheral interface 2003 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 2004 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 2004 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 2004 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 2004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2004 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The display 2005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 2005 is a touch display, it also has the ability to capture touch signals at or above its surface. The touch signal may be input to the processor 2001 as a control signal for processing. At this point, the display 2005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 2005, disposed on the front panel of the terminal 2000; in other embodiments, there may be at least two displays 2005, respectively disposed on different surfaces of the terminal 2000 or in a folded design; in still other embodiments, the display 2005 may be a flexible display disposed on a curved or folded surface of the terminal 2000. The display 2005 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 2005 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 2006 is used to capture images or video. Optionally, the camera assembly 2006 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera on the rear surface. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth camera, a wide-angle camera, or a telephoto camera, so that the main camera and depth camera can be fused for a background-blurring function, or the main camera and wide-angle camera fused for panoramic and VR (Virtual Reality) shooting or other fused shooting functions. In some embodiments, the camera assembly 2006 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
Audio circuitry 2007 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2001 for processing, or inputting the electric signals to the radio frequency circuit 2004 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 2000. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is then used to convert electrical signals from the processor 2001 or the radio frequency circuit 2004 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 2007 may also include a headphone jack.
Power supply 2008 is used to power the various components in terminal 2000. The power source 2008 may be alternating current, direct current, disposable battery, or rechargeable battery. When power supply 2008 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 2000 can further include one or more sensors 2010. The one or more sensors 2010 include, but are not limited to: acceleration sensor 2011, gyro sensor 2012, pressure sensor 2013, optical sensor 2014, and proximity sensor 2015.
The acceleration sensor 2011 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal 2000. For example, the acceleration sensor 2011 may be used to detect components of gravitational acceleration on three coordinate axes. The processor 2001 may control the touch display 2005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 2011. The acceleration sensor 2011 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 2012 may detect a body direction and a rotation angle of the terminal 2000, and the gyro sensor 2012 may cooperate with the acceleration sensor 2011 to collect a 3D motion of the user to the terminal 2000. The processor 2001 may implement the following functions based on the data collected by the gyro sensor 2012: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 2013 may be disposed at a side frame of terminal 2000 and/or at a lower layer of touch display 2005. When the pressure sensor 2013 is disposed at a side frame of the terminal 2000, a grip signal of the user to the terminal 2000 may be detected, and the processor 2001 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 2013. When the pressure sensor 2013 is disposed at the lower layer of the touch display 2005, the processor 2001 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 2005. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 2014 is used to collect the ambient light intensity. In one embodiment, the processor 2001 may control the display brightness of the touch display 2005 based on the ambient light intensity collected by the optical sensor 2014. Specifically, when the intensity of the ambient light is high, the display brightness of the touch display 2005 is turned up; when the ambient light intensity is low, the display brightness of the touch display 2005 is turned down. In another embodiment, the processor 2001 may also dynamically adjust the shooting parameters of the camera assembly 2006 based on the ambient light intensity collected by the optical sensor 2014.
A proximity sensor 2015, also referred to as a distance sensor, is typically provided on the front panel of the terminal 2000. The proximity sensor 2015 is used to collect a distance between a user and the front of the terminal 2000. In one embodiment, when the proximity sensor 2015 detects that the distance between the user and the front surface of the terminal 2000 gradually decreases, the processor 2001 controls the touch display 2005 to switch from the bright screen state to the off screen state; when the proximity sensor 2015 detects that the distance between the user and the front surface of the terminal 2000 gradually increases, the processor 2001 controls the touch display 2005 to switch from the off-screen state to the on-screen state.
It will be appreciated by those skilled in the art that the structure shown in fig. 17 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
The application also provides a computer device, which comprises a processor and a memory, wherein at least one instruction, at least one section of program, code set or instruction set is stored in the memory, and the at least one instruction, the at least one section of program, the code set or instruction set is loaded and executed by the processor to realize the control method of the virtual character provided by any of the above exemplary embodiments.
The present application also provides a computer readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the virtual character control method provided in any of the above-described exemplary embodiments.
The present application also provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the virtual character control method provided in the above-mentioned alternative implementations.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description is merely of preferred embodiments of the present application and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (12)

1. A method for controlling a virtual character, the method comprising:
displaying a virtual environment screen, the virtual environment screen comprising: a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing stunning skills;
when the defeat number of the first virtual character meets a number threshold, acquiring the number of target virtual characters in a first range, wherein the first range is a region range determined in the virtual environment according to the position of the first virtual character; the defeat number is the number of defeats of the second virtual character by the first virtual character; the target virtual character comprises at least one of a teammate virtual character co-camping with the first virtual character, the second virtual character, the teammate virtual character being attacked by the second virtual character and a virtual character controlled by a client;
Determining, in response to the number of target virtual characters not meeting a threshold, whether to automatically trigger the stunning skills according to a first probability;
determining, in response to the number of target avatars meeting the threshold, whether to automatically trigger the stunning skill according to a second probability, the second probability being unequal to the first probability;
controlling the first virtual character to use the stunning skills in response to triggering the stunning skills;
controlling the second virtual character within the scope of action of the stunning skill to be subjected to a stunning action effect, the scope of action comprising a region scope determined in the virtual environment according to the position of the first virtual character.
2. The method according to claim 1, wherein the method further comprises:
in response to not triggering the stunning skill, the number of defeats of the first virtual character is zeroed.
3. The method of claim 1, wherein said controlling the second virtual character within the scope of action of the stunning skill to be subjected to a stunning action effect comprises:
controlling the second virtual character within the scope of action of the stunning skill to be subjected to a first stunning action effect in response to the target number of virtual characters not meeting the threshold;
Controlling the second virtual character within the scope of action of the stunning skill to be subjected to a second stunning action effect in response to the target number of virtual characters meeting the threshold;
wherein the first and second stunning effects correspond to different maximum duration of action or injury values.
4. A method according to any one of claims 1 to 3, wherein the method further comprises:
in response to a purchase operation, reducing the number of virtual items owned by the first virtual character, controlling the first virtual character to obtain the stunning skills;
wherein the virtual article is obtained by defeating the second virtual character by the first virtual character.
5. The method of claim 4, wherein the reducing the number of virtual items owned by the first virtual character in response to a purchase operation, prior to controlling the first virtual character to obtain the stunning skills, further comprises:
and in response to the distance of the first virtual character from a target object being less than a distance threshold, the target object being a virtual object controlled by a server, displaying a skill purchase interface for purchasing the stunning skill.
6. The method according to claim 5, wherein a collision detection model is provided on the target object, the collision detection model being used for detecting a distance between the first virtual character and the target object;
the displaying a skill purchase interface in response to the distance of the first virtual character from the target object being less than a distance threshold comprises:
and displaying the skill purchase interface in response to the three-dimensional virtual model of the first virtual character colliding with the collision detection model.
7. A method according to any one of claims 1 to 3, wherein after said controlling said second virtual character within the scope of action of said stunning skills to be subjected to a stunning action effect, the method further comprises:
displaying the stunning special effect corresponding to the second virtual character, wherein the stunning special effect comprises at least one of displaying a stunning sign or a text prompt on the head of the second virtual character, distinguishing and displaying the second virtual character and controlling the second virtual character to fall down.
8. A method according to any one of claims 1 to 3, wherein the stunning effect corresponds to a maximum duration of effect, the method further comprising:
Timing the action duration of the second virtual character subjected to the stunning action effect;
and stopping the stunning effect of the second virtual character in response to the duration of action reaching the maximum duration of action.
9. The method according to any one of claims 1 to 3, wherein the controlling the second virtual character located within the action range of the stunning skill to be subjected to a stunning effect comprises:
controlling the second virtual character to be subjected to a third stunning effect in response to the second virtual character located within the action range being hit by at least two stunning skills.
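Claim 9's stacking rule maps the number of simultaneous stun-skill hits to an effect tier: one hit yields the ordinary effect, two or more yield the stronger "third" effect. A minimal sketch, with the effect identifiers invented for illustration:

```python
# Hypothetical sketch of claim 9's stacking rule. The string identifiers are
# illustrative labels, not names from the patent.

def resolve_stun_effect(hit_count):
    """Map the number of simultaneous stun-skill hits to an effect tier."""
    if hit_count >= 2:
        return "third_stun_effect"   # stacked, stronger effect
    if hit_count == 1:
        return "first_stun_effect"   # ordinary single-hit effect
    return None                      # not hit: no stun

print(resolve_stun_effect(1))  # → first_stun_effect
print(resolve_stun_effect(3))  # → third_stun_effect
```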
10. A virtual character control apparatus, the apparatus comprising:
the display module is configured to display a virtual environment picture, the virtual environment picture comprising a first virtual character and at least two second virtual characters located in a virtual environment, the first virtual character possessing a stunning skill;
the control module is configured to acquire, when the number of defeats of the first virtual character meets a number threshold, the number of target virtual characters within a first range, the first range being a region determined in the virtual environment according to the position of the first virtual character; the number of defeats is the number of second virtual characters defeated by the first virtual character; and the target virtual character comprises at least one of: a teammate virtual character in the same camp as the first virtual character, the second virtual character, a teammate virtual character being attacked by the second virtual character, or a virtual character controlled by a client;
determine, in response to the number of target virtual characters not meeting a threshold, whether to automatically trigger the stunning skill according to a first probability;
determine, in response to the number of target virtual characters meeting the threshold, whether to automatically trigger the stunning skill according to a second probability, the second probability being unequal to the first probability; and
control the first virtual character to use the stunning skill in response to the stunning skill being triggered;
the control module is further configured to control the second virtual character located within an action range of the stunning skill to be subjected to a first stunning effect, wherein the action range is a region determined in the virtual environment according to the position of the first virtual character.
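The control module in claim 10 combines two gates: a defeat-count threshold, and a probability chosen by how many target characters are within the first range. A sketch of that decision logic, assumed rather than taken from the patent (function and parameter names are invented; a stubbed RNG makes the example deterministic):

```python
# Illustrative sketch of claim 10's auto-trigger rule: once the defeat count
# meets its threshold, fire the stun skill with probability p1 or p2 depending
# on whether the number of nearby target characters meets the count threshold.

import random


def should_auto_trigger(defeats, defeat_threshold,
                        target_count, count_threshold,
                        p1, p2, rng=random.random):
    """Return True if the stunning skill should trigger automatically."""
    if defeats < defeat_threshold:
        return False
    # Choose the probability according to the number of target characters.
    p = p2 if target_count >= count_threshold else p1
    return rng() < p


# Deterministic check with a stubbed RNG: a roll of 0.5 against p2 = 0.9.
fired = should_auto_trigger(defeats=5, defeat_threshold=5,
                            target_count=4, count_threshold=3,
                            p1=0.2, p2=0.9, rng=lambda: 0.5)
print(fired)  # → True
```

Setting the second probability unequal to the first (as the claim requires) lets the skill fire more, or less, readily when many targets are clustered nearby.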
11. A computer device comprising a processor and a memory, wherein the memory has stored therein at least one program that is loaded and executed by the processor to implement the method of controlling a virtual character according to any one of claims 1 to 9.
12. A computer-readable storage medium, wherein at least one program is stored in the readable storage medium, the at least one program being loaded and executed by a processor to implement the method of controlling a virtual character according to any one of claims 1 to 9.
CN202010589764.4A 2020-06-24 2020-06-24 Virtual character control method, device, equipment and medium Active CN111589144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010589764.4A CN111589144B (en) 2020-06-24 2020-06-24 Virtual character control method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN111589144A CN111589144A (en) 2020-08-28
CN111589144B true CN111589144B (en) 2023-05-16

Family

ID=72189058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010589764.4A Active CN111589144B (en) 2020-06-24 2020-06-24 Virtual character control method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN111589144B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112774204B (en) * 2021-01-22 2023-10-20 北京字跳网络技术有限公司 Role collision avoidance method, device, equipment and storage medium
CN113476825A (en) * 2021-07-23 2021-10-08 网易(杭州)网络有限公司 Role control method, role control device, equipment and medium in game
CN113713373A (en) * 2021-08-27 2021-11-30 网易(杭州)网络有限公司 Information processing method and device in game, electronic equipment and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7775882B2 * 2006-06-12 2010-08-17 Kabushiki Kaisha Sega Game apparatus for changing a visual point position of a virtual camera in conjunction with an attack by an enemy character
CN111265872B (en) * 2020-01-15 2021-08-10 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal and storage medium
CN111298440A (en) * 2020-01-20 2020-06-19 腾讯科技(深圳)有限公司 Virtual role control method, device, equipment and medium in virtual environment


Similar Documents

Publication Publication Date Title
JP7395600B2 (en) Presentation information transmission method, presentation information display method, presentation information transmission device, presentation information display device, terminal, and computer program for multiplayer online battle program
CN110433488B (en) Virtual character-based fight control method, device, equipment and medium
CN111249730B (en) Virtual object control method, device, equipment and readable storage medium
CN111589124B (en) Virtual object control method, device, terminal and storage medium
CN110665230B (en) Virtual role control method, device, equipment and medium in virtual world
CN111589144B (en) Virtual character control method, device, equipment and medium
CN110755841A (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN110917623B (en) Interactive information display method, device, terminal and storage medium
CN111659119B (en) Virtual object control method, device, equipment and storage medium
CN111714893A (en) Method, device, terminal and storage medium for controlling virtual object to recover attribute value
CN112076469A (en) Virtual object control method and device, storage medium and computer equipment
CN113289331B (en) Display method and device of virtual prop, electronic equipment and storage medium
CN110465083B (en) Map area control method, apparatus, device and medium in virtual environment
CN111672099A (en) Information display method, device, equipment and storage medium in virtual scene
WO2021159795A1 (en) Method and apparatus for skill aiming in three-dimensional virtual environment, device and storage medium
CN111389005B (en) Virtual object control method, device, equipment and storage medium
CN113117331B (en) Message sending method, device, terminal and medium in multi-person online battle program
CN110478904B (en) Virtual object control method, device, equipment and storage medium in virtual environment
CN112569607B (en) Display method, device, equipment and medium for pre-purchased prop
CN111589139A (en) Virtual object display method and device, computer equipment and storage medium
CN110448905B (en) Virtual object control method, device, equipment and storage medium in virtual environment
CN111330277A (en) Virtual object control method, device, equipment and storage medium
CN113101656B (en) Virtual object control method, device, terminal and storage medium
CN112221135B (en) Picture display method, device, equipment and storage medium
CN112076468B (en) Virtual environment picture display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40027318

Country of ref document: HK

GR01 Patent grant