CN110681156B - Virtual role control method, device, equipment and storage medium in virtual world - Google Patents



Publication number
CN110681156B
Authority
CN
China
Prior art keywords
virtual character
virtual
type plane
point
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910959897.3A
Other languages
Chinese (zh)
Other versions
CN110681156A (en)
Inventor
仇斌
胡耀
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910959897.3A
Publication of CN110681156A
Application granted
Publication of CN110681156B
Active legal status: Current
Anticipated expiration legal status


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/64 Methods for processing data by generating or executing the game program for computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/65 Methods for processing data by generating or executing the game program for computing the condition of a game character

Abstract

The application discloses a method, apparatus, device, and storage medium for controlling a virtual character in a virtual world, applied to the field of computers. The method comprises the following steps: displaying a user interface, wherein the user interface comprises a picture in which the virtual environment is observed from the perspective of a first virtual character, the virtual environment comprises the first virtual character and a second virtual character that are located on and move about a map, and the map comprises a first type plane and a second type plane; controlling the second virtual character to chase the first virtual character on the first type plane; and when the first virtual character travels to the second type plane, controlling the second virtual character to climb over to the second type plane, wherein the second type plane is a plane that cannot be reached using a travel pattern on the first type plane. The method can solve the problem in the related art that the degree of intelligence of the artificial intelligence is low.

Description

Virtual character control method, apparatus, device, and storage medium in a virtual world
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a method, an apparatus, a device, and a storage medium for controlling a virtual character in a virtual world.
Background
In applications based on a three-dimensional virtual world, such as role-playing games (RPGs), there are virtual characters controlled by artificial intelligence, such as monsters.
In the related art, when a master virtual character controlled by a user enters the pursuit range of a monster, the monster pursues the master virtual character. The server obtains the position of the master virtual character and generates a pursuit route for the monster according to that position and a navigation grid, and the terminal controls the monster to pursue the master virtual character along the pursuit route. The navigation grid is a collection of planes the monster can walk on, which the server generates from a model of the three-dimensional virtual world.
When a monster chases the master virtual character, a "stuck monster" problem may occur if the master virtual character is on a plane the monster cannot reach. When the position of the master virtual character cannot be found on the navigation grid, the server cannot generate a pursuit route for the monster from that position and the navigation grid, and the monster is stuck in place, unable to move. The artificial intelligence cannot control a virtual character to reach an arbitrary designated position the way a user can, so its degree of intelligence is low.
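The failure mode above can be sketched in a few lines. This is not the patent's implementation; the navigation grid is simplified to a list of 2D rectangles, and the route generation is reduced to a trivial straight line, purely to show why an off-grid target position leaves the monster stuck.

```python
# Minimal sketch of the "stuck monster" failure: when the target position
# is not on any walkable plane of the navigation grid, no pursuit route
# can be generated, so the monster cannot move.

def find_plane(navgrid, pos):
    """Return the walkable plane containing pos, or None if there is none."""
    x, y = pos
    for plane in navgrid:              # planes as (x0, y0, x1, y1) rectangles
        x0, y0, x1, y1 = plane
        if x0 <= x <= x1 and y0 <= y <= y1:
            return plane
    return None

def pursuit_route(navgrid, monster_pos, target_pos):
    """Return a route (list of points), or None when the target is off-grid."""
    if find_plane(navgrid, target_pos) is None:
        return None                    # target not on the navigation grid
    return [monster_pos, target_pos]   # trivial straight-line route (simplified)

ground = [(0, 0, 100, 100)]            # a single walkable plane: the ground
print(pursuit_route(ground, (10, 10), (50, 50)))    # a route exists
print(pursuit_route(ground, (10, 10), (150, 50)))   # None: monster gets stuck
```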
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a device, and a storage medium for controlling a virtual character in a virtual world, which can solve the problem that the artificial intelligence in the related art cannot control a virtual character to reach a designated position at will like a user and therefore has a low degree of intelligence. The technical solution is as follows:
according to an aspect of the present application, there is provided a virtual character control method in a virtual world, the method including:
displaying a user interface, wherein the user interface comprises a picture for observing a virtual character in a virtual world, the virtual world comprises a first virtual character and a second virtual character that are located on and move about a map, and the map comprises a first type plane and a second type plane;
controlling the second virtual character to chase the first virtual character on the first type plane;
when the second virtual character travels to the junction of the first type plane and the second type plane, controlling the second virtual character to cross the second type plane;
wherein the second type plane is a plane that cannot be reached using the travel pattern on the first type plane.
According to another aspect of the present application, there is provided a virtual character control apparatus in a virtual world, the apparatus including:
a display module, used for displaying a user interface, wherein the user interface comprises a picture for observing a virtual character in a virtual world, the virtual world comprises a first virtual character and a second virtual character that are located on and move about a map, and the map comprises a first type plane and a second type plane;
the control module is used for controlling the second virtual role to chase the first virtual role on the first type plane;
the control module is further configured to control the second virtual character to climb over to the second type plane when the second virtual character travels to the junction between the first type plane and the second type plane;
wherein the second type plane is a plane that cannot be reached using the travel pattern on the first type plane.
According to another aspect of the present application, there is provided a computer device comprising: a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the virtual character control method in the virtual world as described above.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions that is loaded and executed by a processor to implement the virtual character control method in a virtual world as described above.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
when the first virtual character is located on the second type plane, the second virtual character can reach the second type plane by being controlled to climb over to it, which solves the "stuck monster" problem caused by the second virtual character being unable to reach the second type plane using a travel pattern on the first type plane. When the first virtual character jumps from a rooftop to the ground, the second virtual character can jump directly from the rooftop to the ground without detouring, so that the second virtual character has the same mobility as the first virtual character. By adding this movement mode for the second virtual character, the artificial intelligence can control the second virtual character to reach an arbitrary target position by climbing over, so that the way the artificial intelligence controls a virtual character is closer to the way a user controls one, improving the degree of intelligence of the artificial intelligence.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a block diagram of an implementation environment provided by an exemplary embodiment of the present application;
fig. 2 is a flowchart of a virtual character control method in a virtual world according to another exemplary embodiment of the present application;
FIG. 3 is a schematic view of a camera model corresponding to a perspective of a virtual character provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a user interface of a virtual character control method in a virtual world provided by an exemplary embodiment of the present application;
FIG. 5 is a flowchart of a method for controlling a virtual character in a virtual world according to an exemplary embodiment of the present application;
FIG. 6 is a flowchart of a method for controlling a virtual character in a virtual world according to another exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of a user interface of a virtual character control method in a virtual world according to another exemplary embodiment of the present application;
FIG. 8 is a flowchart of a method for controlling a virtual character in a virtual world according to another exemplary embodiment of the present application;
FIG. 9 is a flowchart of a method for controlling a virtual character in a virtual world according to another exemplary embodiment of the present application;
FIG. 10 is a flowchart of a virtual character control method in a virtual world according to another exemplary embodiment of the present application;
FIG. 11 is a flowchart of a method for controlling a virtual character in a virtual world according to another exemplary embodiment of the present application;
fig. 12 is a schematic diagram of a virtual character control apparatus in a virtual world according to another exemplary embodiment of the present application;
fig. 13 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are described:
virtual world: is a virtual world that is displayed (or provided) when an application program runs on a terminal. The virtual world may be a simulated world of a real world, a semi-simulated semi-fictional world, or a purely fictional world. The virtual world may be any one of a two-dimensional virtual world, a 2.5-dimensional virtual world, and a three-dimensional virtual world, which is not limited in this embodiment of the present application. The following embodiments are exemplified in the case where the virtual world is a three-dimensional virtual world.
Virtual character: a movable object in the virtual world. The movable object can be a virtual person, a virtual animal, an animation character, or the like, such as the characters, animals, plants, oil drums, walls, and stones displayed in the three-dimensional virtual world. Optionally, the virtual character is a three-dimensional volumetric model created based on skeletal animation technology. Each virtual character has its own shape and volume in the three-dimensional virtual world and occupies part of the space in the three-dimensional virtual world.
Massively Multiplayer Online Role-Playing Game (MMO-RPG): a game in which a user plays a virtual character and moves about a virtual world; the picture of the virtual world in the game is a picture in which the virtual world is observed from the perspective of a first virtual character controlled by the user. The game also contains a second virtual character that is not controlled by a user. The second virtual character chases the first virtual character, and the first virtual character needs to evade the pursuit of the second virtual character or kill the second virtual character in order to survive and win the game.
Artificial Intelligence (AI): the technical science of researching and developing theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence can imitate human thinking patterns to accomplish a task. In the virtual world, artificial intelligence can imitate the way a user controls a virtual character in order to control one, for example, controlling the virtual character to walk in the virtual world and to attack other virtual characters. Artificial intelligence may refer to programs, algorithms, or software that simulate human thinking patterns, and their executor may be a computer system, a server, or a terminal.
The method provided in the present application may be applied to a virtual reality application program, a three-dimensional map program, a military simulation program, an MMO-RPG, a massively multiplayer online three-dimensional role-playing game (MMO-3DRPG), a first-person shooter (FPS) game, a multiplayer online battle arena (MOBA) game, and the like; the following embodiments take application in games as an example.
FIG. 1 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 has an application program supporting a virtual world installed and running on it. The application program can be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, an MMO-RPG, an MMO-3DRPG, an FPS game, a MOBA game, and a multiplayer gunfight survival game. The first terminal 120 is a terminal used by a first user, who uses it to control a first virtual character located in the virtual world to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, shooting, throwing, and attacking other virtual characters with virtual weapons. Illustratively, the first virtual character is, for example, a simulated character object or an animated character object.
The first terminal 120 is connected to the server 140 through a wireless network or a wired network.
The server 140 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Illustratively, the server 140 includes a processor 144 and a memory 142, and the memory 142 in turn includes a display module 1421, a control module 1422, and a receiving module 1423. The server 140 provides background services for application programs supporting the three-dimensional virtual world. Optionally, the server 140 undertakes the primary computing work while the first terminal 120 and the second terminal 160 undertake the secondary computing work; or the server 140 undertakes the secondary computing work while the first terminal 120 and the second terminal 160 undertake the primary computing work; or the server 140, the first terminal 120, and the second terminal 160 perform collaborative computing using a distributed computing architecture.
The second terminal 160 has an application program supporting a virtual world installed and running on it. The application program can be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, an MMO-RPG, an MMO-3DRPG, an FPS game, a MOBA game, and a multiplayer gunfight survival game. The second terminal 160 is a terminal used by a second user, who uses it to control a second virtual character located in the virtual world to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, shooting, throwing, and attacking other virtual characters with virtual weapons. Illustratively, the second virtual character is, for example, a simulated character object or an animated character object.
Optionally, the first virtual character and the second virtual character are in the same virtual world.
Alternatively, the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application of different control system platforms. The first terminal 120 may generally refer to one of a plurality of terminals, and the second terminal 160 may generally refer to one of a plurality of terminals, and this embodiment is only illustrated by the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 are the same or different, and include: at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer. The following embodiments are illustrated with the terminal comprising a smartphone.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be only one, or several tens or hundreds of the terminals, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
Fig. 2 is a flowchart illustrating a virtual character control method in a virtual world according to an exemplary embodiment of the present application, which can be applied to the first terminal 120 or the second terminal 160 in the computer system shown in fig. 1 or other terminals in the computer system. The method comprises the following steps:
step 202, displaying a user interface, wherein the user interface comprises a picture for observing a virtual character in a virtual world, the virtual world comprises a first virtual character and a second virtual character which are positioned on a map and move, and the map comprises a first type plane and a second type plane.
Alternatively, the view for observing the virtual character in the virtual world is a view for observing the virtual world from the perspective of the first virtual character. The perspective refers to an observation angle when the virtual character is observed in the virtual world from the first person perspective or the third person perspective. Optionally, in an embodiment of the present application, the perspective is a perspective when the virtual character is observed by the camera model in the virtual world.
Optionally, the camera model automatically follows the virtual character in the virtual world, that is, when the position of the virtual character in the virtual world changes, the camera model changes while following the position of the virtual character in the virtual world, and the camera model is always within the preset distance range of the virtual character in the virtual world. Optionally, the relative positions of the camera model and the virtual character do not change during the automatic following process.
The camera model is a three-dimensional model located around the virtual character in the virtual world. When the first-person perspective is adopted, the camera model is located near or at the head of the virtual character. When the third-person perspective is adopted, the camera model may be located behind the virtual character and bound to it, or at any position a preset distance away from the virtual character; through the camera model, the virtual character in the virtual world can be observed from different angles. Optionally, when the third-person perspective is the first person's over-the-shoulder perspective, the camera model is located behind the virtual character (for example, at the head and shoulders). Optionally, besides the first-person and third-person perspectives, the perspective includes other perspectives, such as a top-down perspective; when the top-down perspective is adopted, the camera model may be located above the virtual character's head, giving a view that looks down on the virtual world from the air. Optionally, the camera model is not actually displayed in the virtual world, i.e. it does not appear in the virtual world displayed in the user interface.
Take as an example the case where the camera model is located at any position a preset distance away from the virtual character. Optionally, one virtual character corresponds to one camera model, and the camera model can rotate with the virtual character as the rotation center, for example, around any point of the virtual character. During the rotation, the camera model not only turns but also moves, while the distance between the camera model and the rotation center remains unchanged; that is, the camera model rotates on the surface of a sphere whose center is the rotation center. The chosen point of the virtual character may be the head, the torso, or any point around the virtual character, which is not limited in the embodiments of the present application. Optionally, when the virtual character is observed through the camera model, the center of the camera model's perspective points in the direction from the point on the sphere where the camera model is located toward the sphere's center.
Optionally, the camera model may also observe the virtual character at a preset angle in different directions of the virtual character.
Referring to fig. 3, schematically, a point is determined in the virtual character 11 as a rotation center 12, and the camera model rotates around the rotation center 12, and optionally, the camera model is configured with an initial position, which is a position above and behind the virtual character (for example, a rear position of the brain). Illustratively, as shown in fig. 3, the initial position is position 13, and when the camera model rotates to position 14 or position 15, the direction of the angle of view of the camera model changes as the camera model rotates.
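The spherical camera motion described above can be sketched as follows. This is an illustrative model, not code from the patent: the pivot point, the 5-unit radius, and the yaw/pitch parameterization are all assumptions, chosen only to show that the camera's position and viewing direction change while its distance to the rotation center stays constant.

```python
import math

def camera_position(center, radius, yaw_deg, pitch_deg):
    """A point on a sphere of the given radius around center (x, y, z)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    cx, cy, cz = center
    return (cx + radius * math.cos(pitch) * math.cos(yaw),
            cy + radius * math.sin(pitch),
            cz + radius * math.cos(pitch) * math.sin(yaw))

def view_direction(center, cam_pos):
    """The camera looks from its point on the sphere toward the center."""
    return tuple(c - p for c, p in zip(center, cam_pos))

pivot = (0.0, 1.6, 0.0)                         # e.g. a point near the head
p1 = camera_position(pivot, 5.0, 180.0, 20.0)   # initial: above and behind
p2 = camera_position(pivot, 5.0, 150.0, 20.0)   # rotated: position and view
                                                # direction change, distance
                                                # to the pivot does not
```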
Optionally, the virtual world displayed by the virtual world screen includes: at least one element selected from the group consisting of mountains, flat ground, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles.
The map is a topographic map of the virtual world. Illustratively, a map refers to a model of the virtual world other than the virtual character. Illustratively, the map is an outer surface of the virtual world model. Illustratively, there is a first type plane and a second type plane on the map. Illustratively, the first type of plane and the second type of plane are different planes on the map.
The first virtual character is a virtual character that moves within the virtual world. The first virtual character can be a virtual character controlled by a user or a virtual character controlled by artificial intelligence. Illustratively, the first avatar is an avatar controlled by a first user operating the terminal.
The second avatar is an avatar controlled by artificial intelligence. Such as monsters, soldiers, zombies, mortuary bodies, and the like.
The first type plane is a plane in the virtual world on which the second virtual character can walk continuously: a continuous, flat plane on which a virtual character can stand. Illustratively, the first type plane is a plane that the virtual character can reach by walking in the virtual world, and is therefore also referred to as a walking-reachable plane.
The second type plane is a plane that cannot be reached by the travel pattern on the first type plane.
The travel pattern on the first type plane includes: at least one of walking, running, riding, driving a vehicle, crawling. The second type plane is a plane that the virtual character in the first type plane cannot reach in the above-described traveling manner. Illustratively, the second type of plane is also referred to as a walking-inaccessible plane.
Illustratively, the second type plane is a plane having a height difference from the first type plane. Illustratively, the second-type plane is a higher plane than the first-type plane; alternatively, the second type plane is a lower plane than the first type plane.
Illustratively, a connection surface lies between the first type plane and the second type plane whose inclination angle is too large for the virtual character to stand on; therefore, the virtual character cannot reach the second type plane through the connection surface using a travel pattern on the first type plane. For example, if the first type plane is the ground and the second type plane is the upper surface of a container placed on the ground, the ground and the upper surface of the container are connected directly by the side surface (the connection surface) of the container, and the virtual character cannot reach the upper surface of the container by walking because it cannot stand on the container's side surface.
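The standability test implied above can be sketched with a surface normal: a character can stand on a surface only when its incline does not exceed some maximum standable angle. The 45-degree threshold and the function names are illustrative assumptions, not values from the patent.

```python
import math

MAX_STANDABLE_DEG = 45.0   # assumed maximum incline a character can stand on

def incline_deg(normal):
    """Angle between a surface normal (x, y, z) and the world up axis (+y)."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return math.degrees(math.acos(ny / length))

def can_stand(normal):
    return incline_deg(normal) <= MAX_STANDABLE_DEG

print(can_stand((0.0, 1.0, 0.0)))   # flat ground: True
print(can_stand((1.0, 0.0, 0.0)))   # vertical container side: False
```

Under this test the container's vertical side surface is unstandable, so walking alone can never carry the character from the ground to the container's top.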
And 204, controlling the second virtual character to chase the first virtual character on the first type plane.
And the terminal controls the second virtual character to chase the first virtual character on the first type plane.
Illustratively, the location of the second avatar is located in the first type plane.
Pursuit is a process in which the second virtual character is controlled to reach the position of the first virtual character, with that position as the target end point. For example, when the first virtual character moves in the virtual world, the target end point of the second virtual character changes as the position of the first virtual character changes.
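The moving-end-point pursuit described above can be sketched as a per-tick loop: each tick, the pursuer re-reads the chased character's current position and steps toward it. The 2D coordinates, step size, and function names are illustrative assumptions.

```python
def step_toward(pos, target, speed):
    """Move pos toward target by at most speed; return the new position."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

def chase(pursuer, target_track, speed=2.0):
    """target_track yields the chased character's position on each tick;
    the pursuer's end point is updated to it every tick."""
    for target in target_track:
        pursuer = step_toward(pursuer, target, speed)
    return pursuer

# The chased character walks right one unit per tick; the faster pursuer
# re-targets every tick and closes the gap.
track = [(float(t), 0.0) for t in range(10)]
print(chase((20.0, 0.0), track))
```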
And step 206, controlling the second virtual character to cross over to the second type plane when the second virtual character travels to the junction of the first type plane and the second type plane.
And when the second virtual character travels to the junction of the first type plane and the second type plane, the terminal controls the second virtual character to cross the second type plane.
The junction of the first type plane and the second type plane is an area on the first type plane close to the second type plane. Illustratively, the junction is the edge area of the first type plane closest to the second type plane; for example, if the first type plane is the ground and the second type plane is a roof, the junction is the area near the line where the house meets the ground.
Illustratively, the condition "when the second virtual character travels to the junction of the first type plane and the second type plane" may be replaced with any one of the following: when the second virtual character chases the first virtual character as the first virtual character travels from the first type plane to the second type plane; when the first virtual character travels to the second type plane; when the first virtual character is located on the second type plane and the second virtual character is located at the junction of the first type plane and the second type plane; or when the second virtual character is close to the second type plane.
Climbing over is a type of travel pattern for a virtual character and includes at least one of: climbing, jumping, leaping, teleporting, climbing stairs, sliding, and falling.
For example, when the first virtual character travels to the second type plane, the second virtual character cannot reach the second type plane using a travel pattern on the first type plane; in this case, the second virtual character reaches the second type plane by climbing over and continues to chase the first virtual character.
Illustratively, as shown in fig. 4, in the virtual world, a second virtual character 401 is pursuing a first virtual character 402; as the first virtual character 402 travels to a second type plane 403, the second virtual character 401 climbs from the first type plane 404 to the second type plane 403.
In summary, in the method provided in this embodiment, when the first virtual character is located on the second type plane, the second virtual character is controlled to cross over to the second type plane, so that the second virtual character can reach it; this solves the unnatural behavior caused by the second virtual character being unable to reach the second type plane using the travel mode of the first type plane. By adding this movement mode, the artificial intelligence can control the second virtual character to reach any target position by crossing over, which brings the AI's control of the virtual character closer to the way a user controls one and improves the degree of intelligence of the AI.
Illustratively, the artificial intelligence controls the movement of virtual characters in the virtual world according to a navigation grid. The navigation grid is the set of planes in the virtual world on which the second virtual character can walk. The artificial intelligence acquires the current position of the second virtual character and the position of the first virtual character, and generates, on the navigation grid, a pursuit route along which the second virtual character moves from its current position to the position of the first virtual character.
The pursuit route is a continuous line on the navigation grid composed of at least two waypoints. The artificial intelligence controls the second virtual character to move along the pursuit route as follows: the second virtual character moves from the i-th waypoint to the (i+1)-th waypoint until it reaches the end of the pursuit route (the position of the first virtual character).
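The waypoint-following loop described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the 2D coordinates and the fixed step size are assumptions.

```python
# Minimal sketch of following a pursuit route waypoint by waypoint:
# advance toward the i-th waypoint, snap to it when close, then move on.
# Coordinates and the step size are illustrative assumptions.

def follow_route(position, waypoints, step=1.0):
    """Move from `position` through each waypoint in order; return the path taken."""
    path = [position]
    for target in waypoints:
        while position != target:
            dx = target[0] - position[0]
            dy = target[1] - position[1]
            dist = (dx * dx + dy * dy) ** 0.5
            if dist <= step:          # close enough: snap to the waypoint
                position = target
            else:                     # otherwise advance one step toward it
                position = (position[0] + step * dx / dist,
                            position[1] + step * dy / dist)
            path.append(position)
    return path
```

The route ends at the last waypoint, i.e. the position of the first virtual character.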
The navigation grid is the set of walkable planes for the second virtual character, generated from the outer surface of the virtual world model. A walkable plane is a plane whose area is large enough and whose inclination angle is small enough for the second virtual character to stand on. For example, for a house standing on the ground, the outer surface of the virtual world model has three parts: the ground, the side surfaces of the house, and the roof. All three have enough area for the second virtual character to stand on, but the side surfaces of the house are inclined too steeply, so the walkable planes are the part of the ground not covered by the house and the top surface of the house.
Walkable planes in a navigation grid are either connected or unconnected; these terms describe the connection relation between two planes in the grid. Two planes are connected if their edges touch, or if their edges each touch a third plane, or more generally if they are linked through any chain of connected planes. Two planes are unconnected if their edges do not touch and no such chain exists. For example, the navigation grid may contain the ground, a slope connected to the ground, and a roof that is unconnected to the ground. Illustratively, the first type plane and the second type plane are two unconnected planes in the navigation grid.
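Because connectivity is transitive through shared edges, the connected/unconnected relation can be computed as connected components over the walkable planes. A minimal union-find sketch, where the plane names and the edge-adjacency list are illustrative assumptions:

```python
# Connectivity of walkable planes: two planes are connected if their edges
# touch, directly or through a chain of other planes (transitive closure).
# Plane names and the adjacency list are illustrative assumptions.

def connected_components(planes, edges):
    parent = {p: p for p in planes}

    def find(p):                            # representative of p's component
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path compression
            p = parent[p]
        return p

    for a, b in edges:                      # union planes whose edges touch
        parent[find(a)] = find(b)
    return find

find = connected_components(["ground", "slope", "roof"],
                            [("ground", "slope")])
# ground and slope are connected; the roof is a separate component
```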
In the related art, if two adjacent waypoints on the pursuit route are located on the same plane in the navigation grid or on two connected planes respectively, the second virtual character can reach the next waypoint from one waypoint by adopting the advancing mode on the first type plane; if two adjacent waypoints on the pursuit route are respectively located on two unconnected planes in the navigation grid, the second virtual character cannot reach the next waypoint from one waypoint by adopting the advancing mode on the first type plane. For example, if the first waypoint on the pursuit route is located on the ground and the second waypoint is located on the roof of the house, the second avatar cannot walk from the ground to the roof of the house.
The present application adds jumping points on top of waypoints; that is, a waypoint is either a common waypoint or a jumping point. Waypoints that the second virtual character can reach using the travel mode of the first type plane are common waypoints; waypoints that it cannot reach this way are jumping points. A jumping point connects two unconnected planes in the navigation grid: by linking two jumping points on the navigation grid, the pursuit route enables the second virtual character to move between two unconnected planes. For example, the second virtual character may cross over from one jumping point to the next, thereby moving between two unconnected planes in the navigation grid.
Illustratively, the common waypoints on the pursuit route are all located on connected planes in the navigation grid, i.e., the i-th and (i+1)-th common waypoints are located on the same plane or on two connected planes; each pair of jumping points (a start point and an end point) on the pursuit route is located on two unconnected planes in the navigation grid.
Illustratively, the navigation grid is generated from the static virtual world model, that is, from static scenes in the virtual world such as the ground, buildings, plants, and terrain; through the jumping points of these fixed models, the second virtual character can move freely in the static virtual world. However, when dynamic obstacles appear in the virtual world, such as wire meshes or boxes thrown by other virtual characters to block the second virtual character, the navigation grid contains no jumping points for them, and the second virtual character cannot cross over. Therefore, this application adopts different methods for static scenes and for dynamic obstacles when the artificial intelligence controls the second virtual character to cross over.
Illustratively, a method for controlling a second virtual character to cross a static scene through artificial intelligence is provided.
Fig. 5 is a flowchart illustrating a virtual character control method in a virtual world according to another exemplary embodiment of the present application. The method may be applied in the first terminal 120 or the second terminal 160 in a computer system as shown in fig. 1, or in other terminals in the computer system. Unlike the method shown in fig. 2, step 206 is replaced with the following two steps.
Step 2061, when the second virtual character moves to the junction of the first type plane and the second type plane, the jumping point information of the first type plane and the second type plane is obtained.
When the second virtual character moves to the junction of the first type plane and the second type plane, the terminal acquires the jumping point information of the first type plane and the second type plane.
The jumping point information describes waypoints in the navigation grid that differ from common waypoints; the navigation grid is a polygonal mesh data structure used to describe the map, and common waypoints are waypoints reached using the travel mode of the first type plane.
The jumping point information is information used to connect the first type plane and the second type plane. It is path information provided so that the artificial intelligence can control the second virtual character to reach the second type plane from the first type plane. Illustratively, the jumping point information is a pair of data composed of any point on the first type plane and any point on the second type plane; or, it is a straight line connecting the first type plane and the second type plane; or, it is a line or surface on the first type plane together with a line or surface on the second type plane, and the virtual character may cross over from the first type plane to the second type plane between any two points chosen on those two lines or surfaces.
For example, there may be multiple pieces of jumping point information for the first type plane and the second type plane: if points A and B lie on the first type plane and points C and D lie on the second type plane, the jumping point information may be at least one of the pairs A and C, A and D, B and C, and B and D.
Illustratively, the jumping point information includes a starting point located on the first type plane and an end point located on the second type plane. For example, if the jumping point information is point A on the first type plane and point C on the second type plane, point A is the starting point and point C is the end point.
Illustratively, as shown in fig. 4, the jumping point information includes a start point 405 on the first type plane 404 and an end point 406 on the second type plane 403.
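For illustration, a jumping-point record carrying such a start/end pair might look like the following minimal sketch. The field names and the coordinate values are hypothetical, not defined by the patent.

```python
# Sketch of a jumping-point record: a start point on the first type plane
# and an end point on the second type plane. Field names are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class JumpPoint:
    start: tuple  # (x, y, z) on the first type plane, e.g. point A
    end: tuple    # (x, y, z) on the second type plane, e.g. point C

# Points A/B on the ground and C/D on the roof give several candidate pairs:
candidates = [JumpPoint((0, 0, 0), (1, 0, 3)),   # A -> C
              JumpPoint((0, 0, 0), (2, 0, 3))]   # A -> D
```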
Step 2062, controlling the second virtual character to cross to the second type plane according to the jumping point information.
And the terminal controls the second virtual character to cross over to the second type plane according to the jumping point information.
And the terminal acquires the crossing path according to the jumping point information and controls the second virtual role to cross from the first type plane to the second type plane according to the crossing path.
Illustratively, the terminal controls the second virtual character to cross over from the starting point to the end point. The terminal acquires the starting point on the first type plane and the end point on the second type plane from the jumping point information, and controls the second virtual character to cross over from the starting point to the end point.
Illustratively, the terminal may also determine the crossing mode according to the starting point and the end point, the crossing mode including at least one of climbing, jumping, leaping, and teleporting; the terminal then controls the second virtual character to cross from the starting point to the end point in that mode.
For example, when the end point is higher than the starting point, the terminal determines climbing as the crossing mode and controls the second virtual character to climb from the starting point on the first type plane to the end point on the second type plane. When the starting point is higher than the end point, the terminal determines jumping as the crossing mode and controls the second virtual character to jump from the starting point on the first type plane to the end point on the second type plane.
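The height comparison just described can be sketched as a small selector. The mode names and the tie-breaking rule for equal heights are assumptions; the patent only specifies the two unequal-height cases.

```python
# Choosing a crossing mode from the start/end heights, as described above:
# climb when the end point is higher, jump (down) when the start is higher.
# Mode names and the equal-height default are assumptions.

def crossing_mode(start, end):
    """start/end are (x, y, z) points; z is the height."""
    if end[2] > start[2]:
        return "climb"   # end point higher than start point
    if start[2] > end[2]:
        return "jump"    # start point higher than end point
    return "leap"        # equal heights: assumed default
```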
Illustratively, the terminal may also determine the crossing mode from the jumping point information itself. For example, if the jumping point information records a ladder connecting the first type plane and the second type plane, the terminal determines ladder-climbing as the crossing mode and controls the second virtual character to reach the second type plane by climbing the ladder.
In summary, in the method provided in this embodiment, jumping point information is set so that, when the first virtual character is located on the second type plane, the second virtual character is controlled to cross over to the second type plane according to that information; the second virtual character can thus reach the second type plane, which solves the unnatural behavior caused by the second virtual character being unable to reach it using the travel mode of the first type plane. By adding this movement mode, the artificial intelligence can control the second virtual character to reach any target position by crossing over, which brings the AI's control of the virtual character closer to the way a user controls one and improves the degree of intelligence of the AI.
In the method provided in this embodiment, the crossing mode is determined from the start point and end point of the jumping point information, so the second virtual character can cross over to the second type plane in different modes; the crossing action is thereby adapted to different scenes and appears more realistic.
Illustratively, a method is provided for the artificial intelligence to control the second virtual character to cross over a dynamic obstacle when one appears in the virtual world.
Fig. 6 is a flowchart illustrating a virtual character control method in a virtual world according to another exemplary embodiment of the present application, which can be applied to the first terminal 120 or the second terminal 160 in the computer system shown in fig. 1 or other terminals in the computer system. The method comprises the following steps:
step 202, displaying a user interface, wherein the user interface comprises a picture for observing a virtual character in a virtual world, the virtual world comprises a first virtual character and a second virtual character which move on a map, and the map comprises a first type plane and a second type plane.
Step 2041, a pursuit route is obtained.
The terminal acquires the pursuit route.
Illustratively, the pursuit route is obtained by the terminal from the server.
Illustratively, the terminal uploads the position of the first avatar and the position of the second avatar to the server. And the server generates a pursuit route according to the position of the first virtual character, the position of the second virtual character and the navigation grid, and returns the pursuit route to the terminal.
The navigation grid is the set of areas in the virtual world where the second virtual character can move. It is a polygonal mesh data structure that helps the artificial intelligence navigate and find routes in a complex space, and it is composed of multiple convex polygons; adjacent polygons in the navigation grid are connected to each other, and polygons that cannot be connected are linked by jumping point information.
And 2042, controlling the second virtual character to chase the first virtual character on the first type plane according to the chasing route.
And the terminal controls the second virtual role to chase the first virtual role on the first type plane according to the chasing route.
Illustratively, the second avatar follows the first avatar along a pursuit route.
And step 2063, when the dynamic obstacle exists on the pursuit route, controlling the second virtual character to cross the dynamic obstacle.
And when the dynamic barrier exists on the pursuit route, the terminal controls the second virtual character to cross the dynamic barrier.
Dynamic obstacles are obstacles that occur randomly in the virtual world. For example, the dynamic barrier may be a barrier placed by the first avatar or other avatars; alternatively, the dynamic obstacle is a randomly generated obstacle in the virtual world, such as a randomly occurring airdrop in the virtual world.
And when the terminal judges that the dynamic barrier exists on the pursuit route, controlling the second virtual character to cross the dynamic barrier.
Illustratively, as shown in fig. 7, there is a wire mesh 701 placed by the first virtual character 402 in the virtual world; when the wire mesh 701 lies on the pursuit route, the second virtual character 401 crosses over it.
In summary, in the method provided in this embodiment, when a dynamic obstacle exists on the pursuit route, the second virtual character is controlled to cross over it, so that the second virtual character can get past the dynamic obstacle; the monster behaves more intelligently and its control manner is optimized. By adding this movement mode, the artificial intelligence can control the second virtual character to reach any target position by crossing over, which brings the AI's control of the virtual character closer to the way a user controls one and improves the degree of intelligence of the AI.
Illustratively, in the related art, the method for the server to generate the navigation grid is as follows:
firstly, the server reads the three-dimensional model of the virtual world and voxelizes it. Voxelization replaces the virtual world model with a number of cubes of the same size; just as pixelation works on a two-dimensional picture, a voxel is the three-dimensional analogue of a pixel.
Secondly, the server sets a minimum-area threshold and, from the voxel information of the virtual world, extracts all unit surfaces whose area is greater than or equal to the minimum area. It also sets a maximum inclination angle and, from those unit surfaces, extracts the walkable surfaces whose inclination angle is smaller than the maximum. That is, all the small walkable faces in the virtual world are extracted.
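The second step's filtering by minimum area and maximum inclination can be sketched as below. Representing each face as an (area, unit normal) pair and the threshold values are assumptions; the inclination of a face is measured as the angle between its normal and the vertical axis.

```python
import math

# Second-step filter: keep only faces large enough to stand on and flat
# enough to walk on. Face representation and thresholds are assumptions.

def walkable_faces(faces, min_area=1.0, max_tilt_deg=45.0):
    """faces: list of (area, unit_normal) tuples; normal is (nx, ny, nz)."""
    result = []
    for area, normal in faces:
        # tilt = angle between the face normal and the vertical (z) axis
        tilt = math.degrees(math.acos(max(-1.0, min(1.0, normal[2]))))
        if area >= min_area and tilt < max_tilt_deg:
            result.append((area, normal))
    return result

faces = [(9.0, (0.0, 0.0, 1.0)),   # ground: flat, large     -> kept
         (6.0, (1.0, 0.0, 0.0)),   # house side: vertical    -> filtered out
         (4.0, (0.0, 0.0, 1.0)),   # roof top: flat          -> kept
         (0.2, (0.0, 0.0, 1.0))]   # tiny ledge: too small   -> filtered out
```

Filtering out the vertical house side is exactly what later leaves the roof unconnected to the ground in the grid.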
Thirdly, the server connects walkable surfaces lying on the same plane into continuous, non-overlapping regions that are as large as possible; that is, the small walkable faces in the virtual world are merged into large faces.
Fourth, the server generates a contour of the region. That is, the edge line of the large face that can be walked in the virtual world is determined.
Fifthly, the server sets the maximum number of sides for the convex polygons and, according to the contour of the region, divides the region into several convex polygons each having no more than that maximum number of sides. Because the navigation and routing algorithm works on convex polygons, each large walkable surface must be re-divided into convex polygons whose side counts do not exceed the maximum. For example, if the maximum number of sides is four, the large walkable surface is divided into several quadrilaterals and triangles according to its edge lines.
And sixthly, the server generates a navigation grid.
As can be seen from the related art, the navigation grid is the set of walkable planes segmented from the virtual world model. In the second step of the related art, all planes whose inclination angle exceeds the maximum are filtered out, so planes that were originally connected become unconnected, and some walkable planes in the navigation grid end up as isolated, unreachable planes. Illustratively, the first type plane and the second type plane are unconnected in the navigation grid, i.e., the second virtual character cannot reach the second type plane from the first type plane. For example, the roof is a walkable plane, but the sides of the house are inclined too steeply and are filtered out, so the roof plane is not connected to the ground in the navigation grid and the second virtual character cannot reach the roof from the ground.
Since unconnected planes in the navigation grid cannot be linked by common waypoints (when the pursuit route is generated, two adjacent common waypoints cannot lie on two unconnected planes), a monster cannot travel from one of two unconnected planes to the other through common waypoints alone.
In order to connect unconnected walkable planes in the navigation grid, the present application adds jumping point information during the generation of the navigation grid. Illustratively, the navigation grid generation method of the present application is shown in fig. 8, and the generation process is performed by the server 140 shown in fig. 1. Illustratively, the server comprises a static construction module, a block loading module, an input format module, and a grid reorganization module. The method comprises the following steps:
step 701, the static construction module sends a virtual world model and jumping point information.
The virtual world model and the jumping point information are obtained by the server from the client. Illustratively, the jumping point information is data that is manually marked.
In step 702, the block loading module receives the virtual world model and the jumping point information.
In step 703, the block loading module sends a reception completion confirmation message.
The receiving completion confirmation information is used for confirming that the block loading module has received the virtual world model and the jumping point information.
In step 704, the static construction module receives the reception completion confirmation information.
Step 705, the static construction module sends a blocking instruction.
And step 706, the block loading module divides the virtual world model and the jumping point information into blocks according to the blocking instruction.
The block division is to divide the virtual world model and the jumping point information into a plurality of areas. Illustratively, the virtual world is square, the block division divides the virtual world into four small squares according to the central line of the square, and the virtual world model and the jumping point information in each small square are determined as one block.
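The quadrant block division described above can be sketched as follows. Representing each item (a model fragment or a piece of jumping point information) as a position plus a payload, and the dictionary keyed by quadrant, are assumptions for illustration.

```python
# Sketch of the quadrant block division described above: a square world is
# split along its center lines and each item is assigned to one of four
# blocks. The item format (x, y, payload) is an assumption.

def divide_into_blocks(items, world_size):
    half = world_size / 2.0
    blocks = {(0, 0): [], (0, 1): [], (1, 0): [], (1, 1): []}
    for x, y, payload in items:
        key = (1 if x >= half else 0, 1 if y >= half else 0)
        blocks[key].append((x, y, payload))
    return blocks
```

Each block can then be baked independently, which is what makes the multi-threaded construction in step 714 possible.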
In step 707, the block loading module returns the block data.
The block data is a virtual world model and skip point information after block division.
At step 708, the static build module receives the tile data.
Step 709, the static construction module sends the block data.
Step 710, the input format module receives the block data, changes the format of the block data, and obtains a second block data.
The second block data is the block data of which the format is changed by the input format module.
The input format module converts the block data into a format recognizable by the grid reorganization module, and returns the second block data in that format to the static construction module.
At step 711, the input format module returns the second tile data.
At step 712, the static build module receives the second chunk data.
At step 713, the static construction module sends the second tile data.
Step 714, the grid reorganization module receives the second block data and constructs the navigation grid from it using multiple threads.
According to the block division results of the virtual world models and jumping point information, the grid reorganization module bakes the virtual world models and jumping point information of the blocks in parallel across multiple threads to generate the navigation grid.
In step 715, the grid reorganization module returns the navigation grid.
In summary, in the method provided in this embodiment, jumping point information is added to the navigation grid, so that planes that were originally unconnected become linked through it, and the second virtual character can reach the second type plane from the first type plane via the jumping point information. By adding this movement mode, the artificial intelligence can control the second virtual character to reach any target position by crossing over, which brings the AI's control of the virtual character closer to the way a user controls one and improves the degree of intelligence of the AI.
Illustratively, a method for a second avatar to traverse a dynamic obstacle is presented.
Fig. 9 is a flowchart illustrating a virtual character control method in a virtual world according to another exemplary embodiment of the present application, which can be applied to the first terminal 120 or the second terminal 160 in the computer system shown in fig. 1, or to other terminals in the computer system. Unlike the method shown in fig. 6, step 2063 is replaced with the following six steps.
In step 301, when a dynamic obstacle exists on the pursuit route, a detection line is emitted from a first common waypoint of the pursuit route to a second common waypoint.
When a dynamic obstacle exists on the pursuit route, the terminal emits a detection line from the first common waypoint of the pursuit route to the second common waypoint.
Illustratively, the second common waypoint may also be a jumping point.
Illustratively, the pursuit route includes at least two common waypoints in a navigation grid, the navigation grid being a polygonal mesh data structure describing the map. Waypoints are the end points and turning points of the pursuit route, and two adjacent waypoints are connected by a straight line. Illustratively, a pursuit route with three waypoints (the start point, a common waypoint serving as a turning point, and the end point) consists of the straight line from the start point to the common waypoint plus the straight line from the common waypoint to the end point.
For example, the terminal may emit a straight line from each common waypoint on the pursuit route to the next waypoint to determine whether the straight segment between the two adjacent waypoints is passable.
For example, the terminal emits a detection line from a first common waypoint on the pursuit route to the adjacent second common waypoint; if the detection line reaches the second common waypoint, the two waypoints are reachable in a straight line, i.e., the second virtual character can travel on the first type plane from the first common waypoint to the second common waypoint.
Step 302, when the detection line collides with the dynamic barrier, it is determined that two adjacent common waypoints are blocked by the dynamic barrier and cannot be reached in a straight line.
When the detection line collides with the dynamic barrier, the terminal determines that two adjacent common waypoints are blocked by the dynamic barrier and cannot be reached in a straight line.
When the detection line collides with the dynamic barrier, an intersection point is generated between the dynamic barrier and the pursuit route, namely the dynamic barrier exists on the pursuit route, and at the moment, the terminal determines that two adjacent common waypoints are blocked by the dynamic barrier and cannot be reached in a straight line.
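The detection-line test can be sketched as a 2D segment-versus-box intersection (the slab method); modeling the dynamic obstacle as an axis-aligned box, and the specific coordinates, are assumptions for illustration.

```python
# Sketch of the detection line test: cast a segment from one common waypoint
# toward the next and find the first collision point with a dynamic obstacle.
# Modeling the obstacle as an axis-aligned box is an assumption.

def detect_collision(p0, p1, box_min, box_max):
    """Return the collision point of segment p0->p1 with the box, or None."""
    t_enter, t_exit = 0.0, 1.0
    for axis in range(2):                      # x and y slabs
        d = p1[axis] - p0[axis]
        if d == 0.0:
            if not (box_min[axis] <= p0[axis] <= box_max[axis]):
                return None                    # parallel and outside the slab
        else:
            t_lo = (box_min[axis] - p0[axis]) / d
            t_hi = (box_max[axis] - p0[axis]) / d
            if t_lo > t_hi:
                t_lo, t_hi = t_hi, t_lo
            t_enter, t_exit = max(t_enter, t_lo), min(t_exit, t_hi)
            if t_enter > t_exit:
                return None                    # segment misses the box
    return (p0[0] + t_enter * (p1[0] - p0[0]),
            p0[1] + t_enter * (p1[1] - p0[1]))
```

A `None` result means the two waypoints are reachable in a straight line; a point result is the collision point used in step 303.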
Step 303, acquiring the collision point of the detection line and the dynamic obstacle, and the crossing end point corresponding to the dynamic obstacle; the detection line is the line emitted from the first common waypoint to the adjacent second common waypoint.
And the terminal acquires a collision point of the detection line and the dynamic barrier and a crossing end point corresponding to the dynamic barrier.
Based on the collision between the detection line and the dynamic obstacle, the terminal acquires their intersection point, i.e., the collision point, and also acquires the crossing end point corresponding to the dynamic obstacle.
Illustratively, the dynamic obstacle has several crossing end points, and the terminal determines one of them according to the detection line and the collision point. For example, the dynamic obstacle is a wire mesh with a first crossing end point on its first side and a second crossing end point on its second side; the detection line intersects the first side at the collision point, so the terminal selects the second crossing end point, which lies on the side opposite the collision point, as the crossing end point.
Step 304, the type of the dynamic obstacle is obtained.
The terminal acquires the type of the dynamic obstacle.
Illustratively, the type of the dynamic obstacle includes at least one of its size, height, shape, and the presence or absence of characteristic information, where characteristic information describes features of the obstacle's surface, e.g., a ladder, a mesh, or stairs.
Step 305, determining the crossing mode of the second virtual character according to the type of the dynamic obstacle.
The terminal determines the crossing mode of the second virtual character according to the type of the dynamic obstacle.
Illustratively, the terminal determines the crossing mode of the second virtual character according to the type of the dynamic obstacle, and controls the second virtual character to cross the dynamic obstacle from the collision point to the crossing end point in that mode. For example, if the dynamic obstacle is two meters high and has a ladder on its surface, the second virtual character climbs the ladder from the collision point to the crossing end point, with a crossing duration corresponding to the two-meter height.
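The mapping from obstacle type to crossing mode can be sketched as a small selector. The attribute names, the height threshold, and the mode names are illustrative assumptions; the patent only gives the two-meter ladder example.

```python
# Sketch of mapping obstacle attributes to a crossing mode as described:
# a surface feature or the obstacle height selects the action. Attribute
# names, the 1.0 m threshold, and mode names are assumptions.

def crossing_mode_for_obstacle(height, surface_feature=None):
    if surface_feature == "ladder":
        return "climb_ladder"      # e.g. a two-meter obstacle with a ladder
    if height <= 1.0:
        return "jump"              # low obstacles are simply jumped over
    return "climb"                 # tall featureless obstacles are climbed
```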
And step 306, controlling the second virtual character to cross the dynamic barrier from the collision point to the cross terminal point in a crossing mode.
And the terminal controls the second virtual character to cross the dynamic barrier from the collision point to the cross terminal point in a crossing mode.
Illustratively, after the second virtual character reaches the crossing end point, the pursuit route is obtained again by taking the current position of the second virtual character as a starting point.
In summary, in the method provided in this embodiment, a detection line is emitted from a common waypoint of the pursuit route to determine whether a dynamic obstacle lies on the route; once the detection line hits a dynamic obstacle, the collision point and the crossing end point are acquired and the second virtual character is controlled to cross the obstacle, so that it can get past the dynamic obstacle. The monster behaves more intelligently and its control manner is optimized. By adding this movement mode, the artificial intelligence can control the second virtual character to reach any target position by crossing over, which brings the AI's control of the virtual character closer to the way a user controls one and improves the degree of intelligence of the AI.
The embodiment of the virtual role control method in the virtual world provided by the application is applied to both sides of the terminal and the server.
Fig. 10 is a flowchart illustrating a virtual character control method in a virtual world according to another exemplary embodiment of the present application, which can be applied to the computer system shown in fig. 1. The method comprises the following steps:
step 202, the terminal displays a user interface, wherein the user interface comprises a picture for observing a virtual character in a virtual world, the virtual world comprises a first virtual character and a second virtual character which are positioned on a map, and the map comprises a first type plane and a second type plane.
Step 1001, the terminal triggers a pursuit condition.
The pursuit condition is a condition under which the second virtual character starts pursuing the first virtual character. For example, the pursuit condition may be that the distance between the first virtual character and the second virtual character is less than a threshold; or that the first virtual character enters the attack range of the second virtual character and the level of the first virtual character satisfies the condition for the second virtual character to attack.
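The distance-based pursuit condition above can be sketched as a per-frame check. This is an illustrative sketch only; the function name and the threshold value of 10.0 units are assumptions introduced here, not part of this application:

```python
import math

def pursuit_triggered(first_pos, second_pos, threshold=10.0):
    # Trigger pursuit when the distance between the first virtual character
    # and the second virtual character falls below the threshold.
    # The threshold of 10.0 units is an illustrative assumption.
    return math.dist(first_pos, second_pos) < threshold
```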
In step 1002, the terminal sends the positions of the first virtual character and the second virtual character to the server.
And 1003, the server receives the positions of the first virtual character and the second virtual character and generates a pursuit route according to the navigation grid.
The server generates a pursuit route for the second virtual character based on the positions of the first virtual character and the second virtual character and the navigation grid.
A pursuit route is a data set consisting of at least two waypoints. By acquiring the waypoints on the pursuit route, the artificial intelligence controls the second virtual character to move from the i-th waypoint to the (i+1)-th waypoint in the movement manner determined by that waypoint.
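The waypoint-by-waypoint movement described above can be sketched as follows. The `Waypoint` fields and the character's `walk_to`/`cross_to` methods are hypothetical names introduced for illustration, not identifiers from this application:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Waypoint:
    # Position plus the movement manner used to reach this waypoint;
    # "walk" marks a common waypoint, "cross" marks a jumping point.
    # Field names are assumptions introduced for this sketch.
    x: float
    y: float
    z: float
    move_mode: str

def follow_pursuit_route(character, route: List[Waypoint]):
    # Move the AI-controlled character from the i-th waypoint to the
    # (i+1)-th waypoint using the movement manner of the next waypoint.
    for i in range(len(route) - 1):
        nxt = route[i + 1]
        if nxt.move_mode == "walk":
            character.walk_to(nxt)   # ordinary travel on the first type plane
        else:
            character.cross_to(nxt)  # climb/jump/teleport per jumping point info
```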
At step 1004, the server sends the pursuit route.
Step 2041, the terminal obtains the pursuit route.
The terminal acquires the pursuit route, reads the jumping point information in the pursuit route, and stores the jumping point information in advance.
And 2042, the terminal controls the second virtual character to chase the first virtual character on the first type plane according to the chasing route.
Step 2061, when the second virtual character moves to the junction of the first type plane and the second type plane, the terminal obtains the jumping point information of the first type plane and the second type plane.
And when the second virtual character moves to the junction of the first type plane and the second type plane, the terminal reads the jumping point information and determines a starting point and an end point according to the jumping point information.
In step 1005, the terminal determines a crossing mode according to the starting point and the end point.
In step 1006, the terminal controls the second virtual character to cross from the starting point to the end point in the crossing manner.
The terminal controls the second virtual character to cross from the starting point on the first type plane to the end point on the second type plane in the determined crossing manner.
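One possible way to determine the crossing manner from the starting point and end point is to compare their vertical distance against thresholds. This is a minimal sketch; the thresholds (0.8 and 2.0 units, y-up coordinates) are illustrative assumptions and the application does not prescribe a specific rule:

```python
def choose_crossing_mode(start, end, jump_height=0.8, climb_height=2.0):
    # Pick a crossing manner from the starting point on the first type plane
    # to the end point on the second type plane, based on vertical distance.
    # The thresholds are illustrative assumptions (y-up coordinates).
    dh = abs(end[1] - start[1])
    if dh <= jump_height:
        return "jump"      # low ledge: jump across
    if dh <= climb_height:
        return "climb"     # wall within reach: climb over
    return "teleport"      # otherwise: instantaneous movement
```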
In step 301, when a dynamic obstacle exists on the pursuit route, the terminal emits a detection line from a first common waypoint to an adjacent second common waypoint in the pursuit route.
Step 302, when the detection line collides with the dynamic obstacle, the terminal determines that the two adjacent common waypoints are blocked by the dynamic obstacle and cannot be reached in a straight line.
Step 303, the terminal acquires the collision point of the detection line with the dynamic obstacle and the crossing end point corresponding to the dynamic obstacle.
Step 304, the terminal acquires the type of the dynamic obstacle.
Step 305, the terminal determines the crossing manner of the second virtual character according to the type of the dynamic obstacle.
Step 306, the terminal controls the second virtual character to cross the dynamic obstacle from the collision point to the crossing end point in the crossing manner.
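The detection-line check of steps 301-303 can be sketched with a simplified ray model. As a simplifying assumption of this sketch, each dynamic obstacle is encoded by the parameter interval it covers on the segment between the two waypoints; a real engine would use a physics raycast instead:

```python
def lerp(a, b, t):
    # Linear interpolation between points a and b at parameter t in [0, 1].
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def detect_dynamic_obstacle(p1, p2, obstacles):
    # Cast a detection line from common waypoint p1 toward the adjacent
    # waypoint p2. Each obstacle is modeled, as a simplifying assumption,
    # by the parameter interval (t_enter, t_exit) it covers on the segment.
    # Returns (collision_point, crossing_end_point) for the nearest obstacle,
    # or None when the straight segment is clear.
    hit = None
    for t_enter, t_exit in obstacles:
        if 0.0 <= t_enter <= 1.0 and (hit is None or t_enter < hit[0]):
            hit = (t_enter, t_exit)
    if hit is None:
        return None
    return lerp(p1, p2, hit[0]), lerp(p1, p2, min(hit[1], 1.0))
```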
In summary, in the method provided in this embodiment, the server generates the pursuit route from the positions of the first virtual character and the second virtual character and the navigation grid and sends it to the terminal, and the terminal controls the second virtual character to pursue the first virtual character according to the pursuit route. When the pursuit route carries jumping point information, the second virtual character is controlled to cross from the first type plane to the second type plane according to the jumping point information; when a dynamic obstacle exists on the pursuit route, the collision point and the crossing end point between the detection line and the dynamic obstacle are acquired, and the second virtual character is controlled to cross the dynamic obstacle. This solves the problem that the second virtual character gets stuck because the second type plane cannot be reached using the traveling manner on the first type plane. By adding a movement manner for the second virtual character, the artificial intelligence can control the second virtual character to reach any target position by crossing, which makes the way the artificial intelligence controls a virtual character closer to the way a user controls a virtual character and improves the degree of intelligence of the artificial intelligence.
The following describes a method of controlling a monster to attack a master virtual character by using the virtual character control method in the virtual world provided by this application.
Fig. 11 is a flowchart illustrating a virtual character control method in a virtual world according to another exemplary embodiment of the present application, which can be applied to the first terminal 120 or the second terminal 160 in the computer system shown in fig. 1 or other terminals in the computer system. The method comprises the following steps:
step 1101, determine if the virtual character is within the skill range of the monster.
After the virtual character enters the pursuit range of the monster, the terminal determines whether the virtual character is within the skill range of the monster. If the virtual character is within the skill range of the monster, go to step 1107; otherwise, go to step 1102.
Step 1102, the monster pursues.
The terminal controls the monster to pursue the virtual character, shortening the distance between the monster and the virtual character so that the virtual character enters the skill range of the monster.
At step 1103, it is determined whether the monster is able to move.
The terminal judges whether the monster can move, if so, the step 1104 is carried out, otherwise, the step 1109 is carried out.
At step 1104, the monster seeks the enemy.
The terminal controls the monster to search for the virtual character and obtains the position of the virtual character.
Step 1105, determine whether there is a skip point on the pursuit route.
And the terminal judges whether the pursuit route has a jumping point, if so, the step 1106 is carried out, and if not, the step 1109 is carried out.
At step 1106, the monster crosses over at the jumping point.
The terminal controls the monster to cross over according to the jumping point information and pursue the virtual character.
Step 1107, determine if the monster is blocked in front.
The terminal determines whether an obstacle exists between the monster and the virtual character; if so, step 1109 is performed, otherwise step 1108 is performed.
Step 1108, determine whether the monster can release a skill.
The terminal determines whether the monster can release a skill, illustratively, whether the skill of the monster has finished its cooldown period; if so, step 1110 is performed, otherwise step 1109 is performed.
At step 1110, the monster attacks the virtual character.
The terminal controls the monster to release skill to attack the virtual character.
Step 1111, battle idle.
The terminal controls the monster to enter a battle idle state; illustratively, a monster in the battle idle state stands still in place.
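The decision flow of steps 1101-1111 above can be reduced to a sketch over boolean inputs. Treating the fallback step 1109 as leading to the battle idle state is an assumption of this sketch, since that step is not described in detail:

```python
def monster_tick(in_skill_range, can_move, jump_point_on_route,
                 blocked_in_front, skill_ready):
    # Returns the list of actions the terminal performs in one decision pass.
    actions = []
    if in_skill_range:                            # step 1101: within skill range?
        if not blocked_in_front and skill_ready:  # steps 1107-1108
            actions.append("attack")              # step 1110
        else:
            actions.append("battle_idle")         # assumed fallback (steps 1109/1111)
        return actions
    actions.append("pursue")                      # step 1102
    if not can_move:                              # step 1103
        actions.append("battle_idle")
        return actions
    actions.append("seek_enemy")                  # step 1104
    if jump_point_on_route:                       # step 1105
        actions.append("cross_jump_point")        # step 1106
    else:
        actions.append("battle_idle")             # step 1109 per the flowchart
    return actions
```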
In summary, in the method provided in this embodiment, when the virtual character is located at a position that the monster cannot reach, the monster is controlled, through the set jumping point, to cross over to the position of the virtual character so that the virtual character can be attacked. This solves the problem of the monster getting stuck and makes the monster behave more intelligently. By adding a movement manner for the second virtual character, the artificial intelligence can control the second virtual character to reach any target position by crossing, which makes the way the artificial intelligence controls a virtual character closer to the way a user controls a virtual character and improves the degree of intelligence of the artificial intelligence.
The following are embodiments of the apparatus of the present application, and for details that are not described in detail in the embodiments of the apparatus, reference may be made to corresponding descriptions in the above method embodiments, and details are not described herein again.
Fig. 12 is a schematic structural diagram illustrating a virtual world-based monster control apparatus according to an exemplary embodiment of the present application. The apparatus can be implemented as all or a part of a terminal by software, hardware or a combination of both, and includes: a display module 1201 and a control module 1203.
A display module 1201, configured to display a user interface, where the user interface includes a picture for observing a virtual character in a virtual world, the virtual world includes a first virtual character and a second virtual character that are located on a map, and the map includes a first type plane and a second type plane;
a control module 1203, configured to control the second virtual character to chase the first virtual character on the first type plane;
the control module 1203 is further configured to control the second virtual character to cross to the second type plane when the second virtual character travels to a junction between the first type plane and the second type plane;
wherein the second type plane is a plane that cannot be reached using the travel pattern on the first type plane.
In an optional embodiment, the apparatus further comprises: an acquisition module 1204;
the obtaining module 1204 is configured to obtain jumping point information of the first type plane and the second type plane, where the jumping point information is used to describe a waypoint in a navigation mesh that is different from a common waypoint, the navigation mesh is a polygonal mesh data structure used to describe the map, and the common waypoint is a waypoint reached using the traveling manner on the first type plane;
the control module 1203 is further configured to control the second virtual character to jump to the second type plane according to the jumping point information.
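A possible data layout for the jumping point information obtained by the obtaining module is sketched below. All keys and field names are hypothetical assumptions made for this sketch; the application does not specify a storage format:

```python
# Hypothetical jumping point annotation for a navigation mesh: common
# waypoints carry only a position, while jumping points carry a starting
# point on the first type plane and an end point on the second type plane.
nav_mesh = {
    "wp_1": {"kind": "common", "pos": (0.0, 0.0, 0.0)},
    "jp_7": {"kind": "jump", "start": (4.0, 0.0, 0.0), "end": (4.0, 3.0, 1.0)},
}

def get_jump_point_info(waypoint_id, mesh=nav_mesh):
    # Return (start, end) for a jumping point, or None for a common waypoint.
    node = mesh[waypoint_id]
    if node["kind"] != "jump":
        return None
    return node["start"], node["end"]
```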
In an optional embodiment, the jumping point information includes: a start point located on the first type plane and an end point located on the second type plane;
the control module 1203 is further configured to control the second virtual character to cross from the starting point to the end point.
In an optional embodiment, the apparatus further comprises: a determination module 1202;
the determining module 1202 is configured to determine a crossing manner according to the starting point and the end point, where the crossing manner includes at least one of climbing, jumping, and transient movement;
the control module 1203 is further configured to control the second virtual character to cross from the starting point to the end point in the crossing manner.
In an optional embodiment, the apparatus further comprises: an acquisition module 1204;
the obtaining module 1204 is configured to obtain a pursuit route;
the control module 1203 is further configured to control the second virtual character to chase the first virtual character on the first type plane according to the chase route;
the control module 1203 is further configured to control the second virtual character to cross the dynamic obstacle when the dynamic obstacle exists on the pursuit route.
In an alternative embodiment, the pursuit route comprises at least two common waypoints in a navigation mesh, the navigation mesh being a polygonal mesh data structure for describing the map;
the control module 1203 is further configured to control the second virtual character to cross the dynamic obstacle when two adjacent common waypoints on the pursuit route are blocked by the dynamic obstacle and cannot be reached in a straight line.
In an optional embodiment, the apparatus further comprises: a detection module 1206 and a determination module 1202;
the detection module 1206 is configured to emit a detection line from a first common waypoint to an adjacent second common waypoint in the pursuit route;
the determining module 1202 is configured to determine that the two adjacent common waypoints are blocked by the dynamic obstacle and cannot be reached in a straight line when the detection line collides with the dynamic obstacle.
In an optional embodiment, the obtaining module 1204 is further configured to obtain a collision point of the detection line with the dynamic obstacle, and a crossing end point corresponding to the dynamic obstacle; the detection line is a line emitted from a first common waypoint to an adjacent second common waypoint;
the control module 1203 is further configured to control the second virtual character to cross the dynamic obstacle from the collision point to the crossing end point.
In an optional embodiment, the apparatus further comprises: a determination module 1202;
the obtaining module 1204 is further configured to obtain a type of the dynamic obstacle;
the determining module 1202 is configured to determine a crossing manner of the second virtual character according to the type of the dynamic obstacle;
the control module 1203 is further configured to control the second virtual character to cross the dynamic obstacle from the collision point to the crossing end point in the crossing manner.
Referring to fig. 13, a block diagram of a computer device 1300 according to an exemplary embodiment of the present application is shown. The computer device 1300 may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), or an MP4 player (Moving Picture Experts Group Audio Layer IV). The computer device 1300 may also be referred to by other names such as user equipment or portable terminal.
Generally, computer device 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 1302 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to implement the virtual character control method in a virtual world provided herein.
In some embodiments, the electronic device 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, touch display 1305, camera 1306, audio circuitry 1307, positioning component 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The touch display 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. The touch display 1305 also has the capability to collect touch signals on or over the surface of the touch display 1305. The touch signal may be input to the processor 1301 as a control signal for processing. The touch display 1305 is used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the touch display 1305 may be one, providing the front panel of the electronic device 1300; in other embodiments, the touch display 1305 may be at least two, respectively disposed on different surfaces of the electronic device 1300 or in a folded design; in still other embodiments, the touch display 1305 may be a flexible display disposed on a curved surface or on a folded surface of the electronic device 1300. Even more, the touch screen 1305 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The touch Display 1305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Generally, a front camera is used for realizing video call or self-shooting, and a rear camera is used for realizing shooting of pictures or videos. In some embodiments, the number of the rear cameras is at least two, and each of the rear cameras is any one of a main camera, a depth-of-field camera and a wide-angle camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize a panoramic shooting function and a VR (Virtual Reality) shooting function. In some embodiments, camera assembly 1306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1307 is used to provide an audio interface between the user and the electronic device 1300. The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for realizing voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the electronic device 1300. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuitry 1304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1307 may also include a headphone jack.
The positioning component 1308 is used to locate the current geographic location of the electronic device 1300 for navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1309 is used to provide power to various components within the electronic device 1300. The power source 1309 may be alternating current, direct current, disposable or rechargeable. When the power source 1309 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic apparatus 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1301 may control the touch display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1312 may detect the body direction and the rotation angle of the electronic device 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to acquire a 3D motion of the user on the electronic device 1300. Processor 1301, based on the data collected by gyroscope sensor 1312, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 1313 may be disposed on a side bezel of the electronic device 1300 and/or underlying the touch display 1305. When the pressure sensor 1313 is provided in the side frame of the electronic apparatus 1300, a user's grip signal for the electronic apparatus 1300 can be detected, and left-right hand recognition or shortcut operation can be performed based on the grip signal. When the pressure sensor 1313 is disposed on the lower layer of the touch display 1305, it is possible to control an operability control on the UI interface according to a pressure operation of the user on the touch display 1305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1314 is used for collecting the fingerprint of the user to identify the identity of the user according to the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the electronic device 1300. When a physical button or vendor Logo is provided on the electronic device 1300, the fingerprint sensor 1314 may be integrated with the physical button or vendor Logo.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 can control the display brightness of the touch display screen 1305 according to the intensity of the ambient light collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the touch display 1305 is turned down. In another embodiment, the processor 1301 can also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
A proximity sensor 1316, also known as a distance sensor, is typically disposed on the front side of the electronic device 1300. The proximity sensor 1316 is used to capture the distance between the user and the front face of the electronic device 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front face of the electronic device 1300 gradually decreases, the processor 1301 controls the touch display 1305 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1316 detects that the distance between the user and the front face of the electronic device 1300 gradually increases, the processor 1301 controls the touch display 1305 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 13 is not intended to be limiting of the electronic device 1300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The present application further provides a terminal, including: the virtual role control system comprises a processor and a memory, wherein at least one instruction, at least one program, a code set or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor to realize the virtual role control method in the virtual world provided by the method embodiments.
The present application further provides a computer device, comprising: the virtual role control system comprises a processor and a memory, wherein at least one instruction, at least one program, a code set or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor to realize the virtual role control method in the virtual world provided by the method embodiments.
The present application further provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the virtual character control method in the virtual world provided by the above method embodiments.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (9)

1. A method for controlling a virtual character in a virtual world, the method comprising:
displaying a user interface, wherein the user interface comprises a picture for observing a virtual character in a virtual world, the virtual world comprises a first virtual character and a second virtual character which are positioned on a map and move, and the map comprises a first type plane and a second type plane; the second virtual character is a virtual character controlled by artificial intelligence, and the second type plane is a plane which cannot be reached by adopting a traveling mode on the first type plane;
acquiring a pursuit route;
controlling the second virtual character to chase the first virtual character on the first type plane according to the chasing route; the pursuit route comprising at least two common waypoints in a navigation grid, the navigation grid being a polygonal grid data structure for describing the map;
when the second virtual character travels to the junction of the first type plane and the second type plane, controlling the second virtual character to cross the second type plane;
when two adjacent common waypoints on the pursuit route are blocked by a dynamic obstacle and cannot be reached in a straight line, acquiring a collision point of a detection line with the dynamic obstacle and a crossing end point corresponding to the dynamic obstacle; the dynamic obstacle being an obstacle randomly appearing in the virtual world, and the detection line being a line emitted from a first common waypoint to an adjacent second common waypoint; and controlling the second virtual character to cross the dynamic obstacle from the collision point to the crossing end point.
2. The method of claim 1, wherein said controlling the second avatar to flip to the second type plane comprises:
acquiring jumping point information of the first type plane and the second type plane, wherein the jumping point information is used for describing waypoints in the navigation grid that are different from common waypoints, and the common waypoints are waypoints reached using the traveling mode on the first type plane;
and controlling the second virtual character to cross to the second type plane according to the jumping point information.
3. The method of claim 2, wherein the jump point information comprises: a start point located on the first type plane and an end point located on the second type plane;
the controlling the second virtual character to climb over to the second type plane comprises:
controlling the second virtual character to cross from the start point to the end point.
4. The method of claim 3, wherein the controlling the second virtual character to cross from the start point to the end point comprises:
determining a crossing mode according to the start point and the end point, the crossing mode comprising at least one of climbing, jumping, leaping and teleportation;
controlling the second virtual character to cross from the start point to the end point in the crossing mode.
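The crossing-mode decision of claim 4 can be sketched as a simple classification of the start/end geometry. The thresholds, the mode names, and the rule that large vertical rises are climbed while overly long horizontal gaps fall back to teleportation are all illustrative assumptions, not taken from the claim.

```python
import math

def choose_crossing_mode(start, end, max_jump=1.5, max_leap=4.0):
    """Pick a crossing mode from the start and end points of a jump point.

    start/end are (x, y, z) with z up. Assumed rules: a rise taller than a
    jump is climbed; short horizontal gaps are jumped, longer ones leaped,
    and anything beyond leap range is teleported.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    dz = end[2] - start[2]                 # vertical difference
    horizontal = math.hypot(dx, dy)
    if dz > max_jump:                      # mostly upward: climb the surface
        return "climb"
    if horizontal <= max_jump:
        return "jump"
    if horizontal <= max_leap:
        return "leap"
    return "teleport"
```

In practice each returned mode would select a different animation and movement curve for the AI-controlled character; the classification itself is the part claim 4 recites.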
5. The method of claim 1, further comprising:
emitting the detection line from the first of the two adjacent common waypoints on the pursuit route to the second common waypoint;
when the detection line collides with the dynamic obstacle, determining that the two adjacent common waypoints are blocked by the dynamic obstacle and cannot be reached in a straight line.
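The detection-line test of claim 5 is in essence a segment cast between two adjacent common waypoints. A minimal 2D sketch follows, modelling the dynamic obstacle as a circle; the circular model is an assumption for illustration, since the claim does not fix the obstacle's shape.

```python
import math

def detection_line_hit(p1, p2, center, radius):
    """Cast a detection line from waypoint p1 to waypoint p2 against a
    dynamic obstacle modelled as a circle (center, radius) in 2D.

    Returns the first collision point on the segment, or None when the
    two waypoints can be reached in a straight line.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    fx, fy = p1[0] - center[0], p1[1] - center[1]
    a = dx * dx + dy * dy
    if a == 0:                              # degenerate: coincident waypoints
        return None
    # Solve |p1 + t*(p2-p1) - center|^2 = radius^2 for t in [0, 1].
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                         # the line misses the obstacle
    t = (-b - math.sqrt(disc)) / (2 * a)    # nearer of the two intersections
    if 0.0 <= t <= 1.0:
        return (p1[0] + t * dx, p1[1] + t * dy)
    return None
```

A `None` result means the route segment is clear; a point result is the collision point from which, per claims 1 and 6, the character starts climbing over the obstacle toward the crossing end point.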
6. The method of claim 1, further comprising:
acquiring the type of the dynamic obstacle;
the controlling the second virtual character to climb over the dynamic obstacle from the collision point to the crossing end point comprises:
determining a crossing mode for the second virtual character according to the type of the dynamic obstacle;
controlling the second virtual character to climb over the dynamic obstacle from the collision point to the crossing end point in the crossing mode.
7. An apparatus for controlling a virtual character in a virtual world, the apparatus comprising:
the display module is configured to display a user interface, wherein the user interface comprises a picture for observing a virtual character in a virtual world, the virtual world comprises a first virtual character and a second virtual character that are located on a map and move about it, and the map comprises a first type plane and a second type plane; the second virtual character is a virtual character controlled by artificial intelligence, and the second type plane is a plane that cannot be reached using the traveling mode of the first type plane;
the acquisition module is configured to acquire a pursuit route;
the control module is configured to control the second virtual character to chase the first virtual character on the first type plane according to the pursuit route, the pursuit route comprising at least two common waypoints in a navigation mesh, the navigation mesh being a polygonal mesh data structure for describing the map;
the control module is further configured to control the second virtual character to climb over to the second type plane when the second virtual character travels to the junction of the first type plane and the second type plane;
the acquisition module is further configured to acquire the collision point of a detection line with the dynamic obstacle and the crossing end point corresponding to the dynamic obstacle when two adjacent common waypoints on the pursuit route are blocked by a dynamic obstacle and cannot be reached in a straight line, the dynamic obstacle being an obstacle that appears randomly in the virtual world, and the detection line being a line emitted from the first of the two adjacent common waypoints to the second;
the control module is further configured to control the second virtual character to climb over the dynamic obstacle from the collision point to the crossing end point.
8. A computer device, characterized in that the computer device comprises: a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the virtual character control method in the virtual world according to any one of claims 1 to 6.
9. A computer-readable storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the virtual character control method in the virtual world according to any one of claims 1 to 6.
CN201910959897.3A 2019-10-10 2019-10-10 Virtual role control method, device, equipment and storage medium in virtual world Active CN110681156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910959897.3A CN110681156B (en) 2019-10-10 2019-10-10 Virtual role control method, device, equipment and storage medium in virtual world

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910959897.3A CN110681156B (en) 2019-10-10 2019-10-10 Virtual role control method, device, equipment and storage medium in virtual world

Publications (2)

Publication Number Publication Date
CN110681156A CN110681156A (en) 2020-01-14
CN110681156B (en) 2021-10-29

Family

ID=69111898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910959897.3A Active CN110681156B (en) 2019-10-10 2019-10-10 Virtual role control method, device, equipment and storage medium in virtual world

Country Status (1)

Country Link
CN (1) CN110681156B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111228804B (en) * 2020-02-04 2021-05-14 腾讯科技(深圳)有限公司 Method, device, terminal and storage medium for driving vehicle in virtual environment
CN111714891B (en) * 2020-06-22 2021-05-11 苏州幻塔网络科技有限公司 Role climbing method and device, computer equipment and readable storage medium
CN111773696B (en) * 2020-07-13 2022-04-15 腾讯科技(深圳)有限公司 Virtual object display method, related device and storage medium
CN112295225B (en) * 2020-11-02 2021-08-10 不鸣科技(杭州)有限公司 Multithreading updating method of way-finding grid
CN113559516B (en) * 2021-07-30 2023-07-14 腾讯科技(深圳)有限公司 Virtual character control method and device, storage medium and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105056528A (en) * 2015-07-23 2015-11-18 珠海金山网络游戏科技有限公司 Virtual character moving method and apparatus
CN106075906A * 2016-06-03 2016-11-09 腾讯科技(深圳)有限公司 Simulated-object pathfinding method, scene construction method, and corresponding apparatus
CN106110656A * 2016-07-07 2016-11-16 网易(杭州)网络有限公司 Method and apparatus for calculating a route in a game scene
CN106790224A * 2017-01-13 2017-05-31 腾讯科技(深圳)有限公司 Method and server for controlling pathfinding of a simulated object
CN108463273A * 2015-11-04 2018-08-28 Cygames Inc. Game system and the like that performs pathfinding for a non-player character based on a player's movement history
CN108635853A * 2018-03-23 2018-10-12 腾讯科技(深圳)有限公司 Object control method and apparatus, storage medium, and electronic device
US20190105573A1 (en) * 2016-07-21 2019-04-11 Sony Interactive Entertainment America Llc Method and system for accessing previously stored game play via video recording as executed on a game cloud system
CN110180182A * 2019-04-28 2019-08-30 腾讯科技(深圳)有限公司 Collision detection method, apparatus, storage medium and electronic device
CN110193198A (en) * 2019-05-23 2019-09-03 腾讯科技(深圳)有限公司 Object jump control method, device, computer equipment and storage medium
CN110309236A * 2018-02-28 2019-10-08 深圳市萌蛋互动网络有限公司 Pathfinding method and apparatus in a map, computer device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8092293B2 (en) * 2006-09-13 2012-01-10 Igt Method and apparatus for tracking play at a roulette table

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105056528A (en) * 2015-07-23 2015-11-18 珠海金山网络游戏科技有限公司 Virtual character moving method and apparatus
CN108463273A * 2015-11-04 2018-08-28 Cygames Inc. Game system and the like that performs pathfinding for a non-player character based on a player's movement history
CN106075906A * 2016-06-03 2016-11-09 腾讯科技(深圳)有限公司 Simulated-object pathfinding method, scene construction method, and corresponding apparatus
CN106110656A * 2016-07-07 2016-11-16 网易(杭州)网络有限公司 Method and apparatus for calculating a route in a game scene
US20190105573A1 (en) * 2016-07-21 2019-04-11 Sony Interactive Entertainment America Llc Method and system for accessing previously stored game play via video recording as executed on a game cloud system
CN106790224A * 2017-01-13 2017-05-31 腾讯科技(深圳)有限公司 Method and server for controlling pathfinding of a simulated object
CN110309236A * 2018-02-28 2019-10-08 深圳市萌蛋互动网络有限公司 Pathfinding method and apparatus in a map, computer device, and storage medium
CN108635853A * 2018-03-23 2018-10-12 腾讯科技(深圳)有限公司 Object control method and apparatus, storage medium, and electronic device
CN110180182A * 2019-04-28 2019-08-30 腾讯科技(深圳)有限公司 Collision detection method, apparatus, storage medium and electronic device
CN110193198A (en) * 2019-05-23 2019-09-03 腾讯科技(深圳)有限公司 Object jump control method, device, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Anonymous. Temple Run Meets Fortnite: The Heroes Take the Stage (video). https://haokan.baidu.com/v?vid=3891424503494064524. 2018, full video length. *
Temple Run Meets Fortnite: The Heroes Take the Stage (video); Anonymous; https://haokan.baidu.com/v?vid=3891424503494064524; 2018-10-28; full video length *
A Survey of Intelligent Pathfinding Techniques for 3D Scenes; Gao Tianhan; Computer Engineering and Applications; 2017-01-15 (Issue 1); pp. 16-22 *

Also Published As

Publication number Publication date
CN110681156A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN110681156B (en) Virtual role control method, device, equipment and storage medium in virtual world
CN111589142B (en) Virtual object control method, device, equipment and medium
CN111035918B (en) Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN111228804B (en) Method, device, terminal and storage medium for driving vehicle in virtual environment
CN110665230B (en) Virtual role control method, device, equipment and medium in virtual world
CN110507994B (en) Method, device, equipment and storage medium for controlling flight of virtual aircraft
CN110613938B (en) Method, terminal and storage medium for controlling virtual object to use virtual prop
CN111589133B (en) Virtual object control method, device, equipment and storage medium
CN110755845B (en) Virtual world picture display method, device, equipment and medium
CN111589124B (en) Virtual object control method, device, terminal and storage medium
CN112121422B (en) Interface display method, device, equipment and storage medium
CN111420402B (en) Virtual environment picture display method, device, terminal and storage medium
CN111462307A (en) Virtual image display method, device, equipment and storage medium of virtual object
CN110694273A (en) Method, device, terminal and storage medium for controlling virtual object to use prop
CN110801628B (en) Method, device, equipment and medium for controlling virtual object to restore life value
CN111338534A (en) Virtual object game method, device, equipment and medium
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN112569600B (en) Path information sending method in virtual scene, computer device and storage medium
CN111298440A (en) Virtual role control method, device, equipment and medium in virtual environment
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN111672104A (en) Virtual scene display method, device, terminal and storage medium
CN111389005A (en) Virtual object control method, device, equipment and storage medium
CN112402962A (en) Signal display method, device, equipment and medium based on virtual environment
CN113577765A (en) User interface display method, device, equipment and storage medium
CN112221142A (en) Control method and device of virtual prop, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40020121

Country of ref document: HK

GR01 Patent grant