CN108031121B - Method for realizing AI (artificial intelligence) behavior of an NPC (non-player character) in a VR (virtual reality) game - Google Patents

Method for realizing AI (artificial intelligence) behavior of an NPC (non-player character) in a VR (virtual reality) game

Info

Publication number
CN108031121B
CN108031121B (application CN201711446880.5A)
Authority
CN
China
Prior art keywords
npc
state
blueprint
behavior
animation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711446880.5A
Other languages
Chinese (zh)
Other versions
CN108031121A (en
Inventor
盛世庭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Snail Digital Technology Co Ltd
Original Assignee
Suzhou Snail Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Snail Digital Technology Co Ltd filed Critical Suzhou Snail Digital Technology Co Ltd
Priority to CN201711446880.5A priority Critical patent/CN108031121B/en
Publication of CN108031121A publication Critical patent/CN108031121A/en
Application granted granted Critical
Publication of CN108031121B publication Critical patent/CN108031121B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/56Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/65Methods for processing data by generating or executing the game program for computing the condition of a game character
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082Virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for realizing the AI behavior of an NPC in a VR game implements the NPC's AI behavior through mutual calls among an NPC blueprint, the NPC's AI controller, the NPC's AI behavior tree, and an NPC animation blueprint. The NPC blueprint decides whether the NPC walks freely, the interaction between the NPC and the game player, and the interaction between the NPC and virtual objects in the game scene; the NPC's AI controller writes the results of the NPC blueprint's decisions, as variables, into the blackboard data of the NPC's AI behavior tree; the NPC's AI behavior tree uses the blackboard data to decide the NPC's behavior in each state and the switching among behaviors; and the NPC animation blueprint determines, from the NPC blueprint's variable assignments, which frame animation and montage the NPC should play in each state. The invention adopts an AI behavior tree together with a method resembling environmental perception, making the NPC more vivid and lifelike and the player's experience more realistic.

Description

Method for realizing AI (artificial intelligence) behavior of an NPC (non-player character) in a VR (virtual reality) game
Technical Field
The invention relates to the field of VR games, in particular to a method for realizing AI behaviors of NPC in a VR game.
Background
VR games are currently developing rapidly, and user experience and interaction have become their most important aspects. The expression of AI behavior in VR games is likewise growing in importance. However, the AI methods in VR games currently on the market are simplistic: AI behavior is monotonous, its expression is unconvincing, and the user experience is poor.
The invention aims to solve the prior-art problem of monotonous AI behavior, so that AI in a VR game is expressed more realistically and the user experience is better.
Disclosure of Invention
The invention provides a method for realizing AI behaviors of NPC in VR games, which is realized by the following technical scheme:
a method for implementing an AI behavior of an NPC in a VR game, the AI behavior of the NPC being implemented by an NPC blueprint, an AI controller of the NPC, an AI behavior tree of the NPC, and an NPC animation blueprint, the method comprising: the NPC blueprint is used for judging whether the NPC can walk freely or not, the interaction between the NPC and a game player and the interaction between the NPC and a virtual article in a game scene; the AI controller of the NPC is used for assigning the result of the NPC blueprint judgment to blackboard data of the AI behavior tree of the NPC in a variable mode; the AI behavior tree of the NPC judges behaviors of the NPC in various states and switching among the behaviors through the blackboard data; and the NPC animation blueprint determines the frame animation and MONTAGE which should be played by the NPC in each state according to the variable assignment of the NPC blueprint.
Further, the method by which the NPC walks freely in the game scene is as follows:
the game scene is started;
the NPC blueprint initializes the NPC state variable to the idle state;
under the conditional branch where the NPC state variable is equal to the idle state, the NPC's AI behavior tree controls the NPC's behavior and finds a random position in the path-finding grid (navigation mesh) of the game scene;
a smart-move method is called to move the NPC to the random position;
the NPC animation blueprint receives the NPC blueprint's assignment to the NPC state variable and calls the walking-state animation blend space;
the NPC plays the walking animation and walks to the random position.
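The free-walk step of picking a random position outside a radius of 200 and within a radius of 500 can be sketched as follows. This is a minimal Python illustration of the sampling only; the patent's actual implementation uses Blueprint nodes, and a real version would also snap the point onto the navigation mesh.

```python
import math
import random

def random_walk_target(npc_x, npc_y, r_min=200.0, r_max=500.0):
    """Pick a random point at a distance between r_min and r_max from the NPC.

    Sketch of the 'random position in the path-finding grid' step; snapping
    the result onto the nav mesh is left out.
    """
    angle = random.uniform(0.0, 2.0 * math.pi)
    # Area-uniform sampling inside the annulus (avoids clustering near r_min).
    radius = math.sqrt(random.uniform(r_min ** 2, r_max ** 2))
    return npc_x + radius * math.cos(angle), npc_y + radius * math.sin(angle)
```

Every point returned lies in the annulus between the two radii, so the NPC never picks a target on top of itself and never wanders arbitrarily far.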
Further, a Tick (heartbeat) function in the NPC blueprint handles the logical judgment between the NPC and its environment; after the game player enters the game scene, the NPC blueprint sets the NPC's state variable to escape or MONTAGE to realize interaction between the NPC and the game player.
The specific method by which the NPC blueprint sets the NPC's state variable to escape or MONTAGE is as follows: set the values of the probability variables for the escape state and the MONTAGE state; add the two values; draw a random number between zero and that sum with a random-number method; if the random number is less than or equal to the value of the escape-state probability variable, set the state to escape, otherwise set it to MONTAGE.
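The weighted draw described above can be sketched in a few lines. This is an illustrative Python sketch, not the Blueprint implementation; the state names are stand-ins for the patent's state variable values.

```python
import random

FLEE, MONTAGE = "flee", "montage"

def pick_reaction(flee_prob, montage_prob, rng=random):
    """Choose the NPC's reaction state by the weighted draw in the text:
    take a random integer in [0, flee_prob + montage_prob]; a value less
    than or equal to flee_prob selects the escape state, anything larger
    selects the MONTAGE state."""
    total = flee_prob + montage_prob
    roll = rng.randint(0, total)
    return FLEE if roll <= flee_prob else MONTAGE
```

Raising one probability variable relative to the other biases the NPC toward that reaction, which is how the designer tunes how skittish the creature feels.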
In the escape state, the NPC's AI behavior tree calls the conditional branch where the NPC state is equal to the escape state to control the NPC's behavior, and finds a random position in the path-finding grid of the game scene; the NPC animation blueprint calls the walking-state animation blend space; the NPC moves to the random position; after the NPC reaches it, the NPC's state is set to idle. The NPC moves to the random position with uniform deceleration.
When the NPC state is MONTAGE, the NPC's AI behavior tree calls the conditional branch where the NPC state is equal to the MONTAGE state to control the NPC's behavior; the montage resource is played, the NPC performs the corresponding action, and after the action finishes the NPC's state is set to idle.
Further, the method by which the NPC interacts with virtual objects is as follows:
under the conditional branch of the NPC's AI behavior tree where the NPC state is equal to the idle state,
an environment-perception trigger range is established centered on the NPC's position, and virtual objects entering the range are captured;
the first captured virtual object is set as the target object; if the target object is null, the target object is re-determined;
when the target object is not null, the NPC's AI behavior tree calls the logic of the target-object-not-null conditional branch to control the NPC's behavior;
a smart-move method is called to move the NPC to the target object;
the NPC's interaction variable with the virtual object is set to true;
the NPC animation blueprint plays the interaction animation between the NPC and the virtual object according to this assignment;
after the interaction finishes, the virtual object is removed and the target object is set to null.
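The capture-then-consume loop above can be sketched as follows. This is a hypothetical Python model of the logic only: the sphere-overlap query, object list shape, and trigger radius are illustrative assumptions, not the patent's engine calls.

```python
import math

def find_target(npc_pos, objects, trigger_radius=300.0):
    """Return the first object inside the perception radius, or None.

    Stands in for the engine's sphere-overlap query; 'objects' is a list of
    (name, (x, y)) pairs and trigger_radius is an assumed value.
    """
    for name, (x, y) in objects:
        if math.hypot(x - npc_pos[0], y - npc_pos[1]) <= trigger_radius:
            return name
    return None

def interact(npc_pos, objects):
    """One pass of the idle-state interaction logic from the steps above:
    pick a target, perform the interaction, then remove the object and
    clear the target."""
    target = find_target(npc_pos, objects)
    if target is None:
        return None, objects          # target stays null; re-determine next tick
    remaining = [o for o in objects if o[0] != target]
    return target, remaining          # object consumed, target reset afterwards
```

Each call either consumes the nearest-listed object in range or leaves the world unchanged, matching the loop of re-determining the target whenever it is null.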
The position of the virtual object is converted into the NPC's local space by a coordinate transformation method and assigned to a variable, and precise interaction between the NPC and the virtual object is realized according to that assignment.
Further, the random position mentioned above is found at a distance between 200 and 500 units (outside a radius of 200 and within a radius of 500) centered on the NPC's position.
The advantage of the invention is that it adopts an AI behavior tree together with a method resembling environmental perception, making the NPC more vivid and lifelike and the player's experience more realistic.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a flow chart of the dodo bird's logical behavior determination according to an embodiment of the invention;
FIG. 2 shows a flow chart of the dodo bird's behavior according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings.
A method for realizing the AI behavior of an NPC in a VR game implements the NPC's AI behavior through mutual calls among an NPC blueprint, the NPC's AI controller, the NPC's AI behavior tree, and an NPC animation blueprint. The NPC blueprint decides whether the NPC walks freely, the interaction between the NPC and the game player, and the interaction between the NPC and virtual objects in the game scene; the NPC's AI controller assigns the results of the NPC blueprint's decisions, as variables, to the blackboard data of the NPC's AI behavior tree; the NPC's AI behavior tree uses the blackboard data to decide the NPC's behavior in each state and the switching among behaviors; and the NPC animation blueprint determines, from the NPC blueprint's variable assignments, which frame animation and montage the NPC should play in each state.
The method by which the NPC walks freely in the game scene comprises the following steps: the game scene is started; the NPC blueprint initializes the NPC state variable to the idle state; under the conditional branch where the NPC state variable is equal to the idle state, the NPC's AI behavior tree controls the NPC's behavior and finds a random position in the path-finding grid of the game scene; a smart-move method is called to move the NPC to the random position; the NPC animation blueprint receives the NPC blueprint's assignment to the NPC state variable and calls the walking-state animation blend space; the NPC plays the walking animation and walks to the random position. The random position is found at a distance between 200 and 500 units centered on the NPC's position.
The Tick (heartbeat) function in the NPC blueprint handles the logical judgment between the NPC and its environment; after a game player enters the game scene, the NPC blueprint sets the NPC's state variable to escape or MONTAGE to realize interaction between the NPC and the game player.
The specific method by which the NPC blueprint sets the NPC's state variable to escape or MONTAGE is as follows: set the values of the probability variables for the escape state and the MONTAGE state; add the two values; draw a random number between zero and that sum with a random-number method; if the random number is less than or equal to the value of the escape-state probability variable, set the state to escape, otherwise set it to MONTAGE.
In the escape state, the NPC's AI behavior tree calls the conditional branch where the NPC state is equal to the escape state to control the NPC's behavior, and finds a random position in the path-finding grid of the game scene; the NPC animation blueprint calls the walking-state animation blend space; the NPC moves to the random position; after the NPC reaches it, the NPC's state is set to idle. The NPC moves to the random position with uniform deceleration. The random position is found at a distance between 200 and 500 units centered on the NPC's position.
When the NPC state is MONTAGE, the NPC's AI behavior tree calls the conditional branch where the NPC state is equal to the MONTAGE state to control the NPC's behavior; the montage resource is played, the NPC performs the corresponding action, and after the action finishes the NPC's state is set to idle.
The method by which the NPC interacts with virtual objects is as follows:
under the conditional branch of the NPC's AI behavior tree where the NPC state is equal to the idle state,
an environment-perception trigger range is established centered on the NPC's position, and virtual objects entering the range are captured; the first captured virtual object is set as the target object, and if the target object is null, the target object is re-determined; when the target object is not null, the NPC's AI behavior tree calls the logic of the target-object-not-null conditional branch to control the NPC's behavior; a smart-move method is called to move the NPC to the target object; the NPC's interaction variable with the virtual object is set to true; the NPC animation blueprint plays the interaction animation between the NPC and the virtual object according to this assignment; and after the interaction finishes, the virtual object is removed and the target object is set to null.
In order to realize precise interaction between the NPC and the virtual object, the position of the virtual object is converted into the NPC's local space by a coordinate transformation method and assigned to a variable, and the interaction between the NPC and the virtual object is realized according to that assignment.
The technical solution of the invention is illustrated by a representative embodiment: a method for realizing the AI behavior of a dodo bird in a VR game. The dodo's AI behavior is realized through mutual calls among the dodo blueprint, the dodo AI controller, the dodo AI behavior tree, and the dodo animation blueprint.
The dodo blueprint decides whether the dodo strolls about leisurely, whether a player's hand controller has come close to the dodo, and whether the dodo has seen food and runs over to eat it.
The dodo AI controller assigns the dodo blueprint's judgment results, as variables, to the blackboard data of the dodo AI behavior tree, such as the dodo's state variable eDodoState and the found food object ObjectFood.
The dodo AI behavior tree uses the blackboard data to decide the dodo's behavior in each state and the switching among behaviors.
The dodo animation blueprint determines, from the dodo blueprint's variable assignments, which frame animation and montage should be played in each of the dodo's states.
The dodo's AI behavior will now be described in greater detail with reference to FIG. 1, which shows the flow chart of the dodo's logical behavior determination.
After the scene starts, the dodo blueprint initializes the eDodoState variable to DODO_STATE_IDLE. The dodo's AI behavior tree then controls the dodo's behavior under the conditional branch where eDodoState == DODO_STATE_IDLE: the behavior tree finds a random position in the scene's path-finding grid, outside a radius of 200 units and within a radius of 500 units centered on the dodo's position. The behavior tree calls the AIMoveTo method to make the dodo walk to that position; meanwhile the dodo's animation blueprint receives the assignment to the dodo blueprint's eDodoState variable and calls the Dodo_Walk_BS animation blend space for the walking state (which smooths the transition from standing to walking), so the dodo plays the walking animation while moving to the target position.
After a player enters the scene, the player is added to the Player container variable of the dodo blueprint. The dodo blueprint loops over each player, checking whether the distance from either the left or the right hand controller to the dodo is less than or equal to the configured threshold variable fMaxDangerDistance; if so, the dodo blueprint sets the eDodoState variable to DODO_STATE_FLEE_FOR_PLAYER or DODO_STATE_MONTAGE_FOR_PLAYER, with the choice controlled by probability.
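The per-tick proximity check can be sketched as follows. This is an illustrative Python model: the dictionary shape for players and the threshold value are assumptions made for the example, not the patent's variables.

```python
import math

def player_too_close(npc_pos, players, max_danger_distance=150.0):
    """Per-tick check described in the text: return True if any player's
    left or right hand controller is within the configured distance of the
    NPC, meaning the NPC must react (flee or play a montage)."""
    for player in players:
        for hand in (player["left_hand"], player["right_hand"]):
            if math.dist(hand, npc_pos) <= max_danger_distance:
                return True   # trigger a reaction state
    return False              # stay idle
```

Running this inside the Tick function each frame is what lets the creature notice the player's hands the instant they come within reach.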
1) The probability variable for DODO_STATE_FLEE_FOR_PLAYER is FleeProbability, and the probability variable for DODO_STATE_MONTAGE_FOR_PLAYER is MontageProbability. FleeProbability and MontageProbability are added, and the RandomIntegerInRange method is called to draw a random number between 0 and that sum; if the random number is less than or equal to FleeProbability, the state is set to DODO_STATE_FLEE_FOR_PLAYER, and otherwise to DODO_STATE_MONTAGE_FOR_PLAYER.
2) When eDodoState is set to DODO_STATE_FLEE_FOR_PLAYER, the AI behavior tree calls the conditional branch where eDodoState == DODO_STATE_FLEE_FOR_PLAYER to control the dodo's behavior. As before, the behavior tree finds a random position in the scene's path-finding grid, again outside a radius of 200 units and within a radius of 500 units centered on the dodo's position, and the dodo's animation blueprint calls the Dodo_Walk_BS animation blend space of the walking state while the dodo walks to the target position. The difference is that the dodo does not move at uniform speed but with uniform deceleration. Here the advantage of the Dodo_Walk_BS blend space shows itself: the dodo's wing-flapping frequency becomes smooth and slow, so the walk feels more real to the player and there is no abrupt effect of the dodo stopping suddenly. After the target position is reached, the eDodoState variable is set back to DODO_STATE_IDLE.
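The uniformly decelerating walk can be simulated in one dimension to show why the stop is smooth. This is an illustrative Python sketch with assumed speed and deceleration values; the engine drives the same effect through the character movement settings and the blend space.

```python
import math

def decelerating_walk(start, target, max_speed=300.0, decel=200.0, dt=1/60):
    """Simulate a uniformly decelerating walk toward a 1-D target. Speed
    follows v = sqrt(2 * decel * remaining_distance), capped at max_speed,
    so it falls to zero exactly at the target instead of cutting off."""
    pos = start
    positions = [pos]
    while abs(target - pos) > 1e-3:
        remaining = abs(target - pos)
        speed = min(max_speed, math.sqrt(2.0 * decel * remaining))
        step = min(speed * dt, remaining)      # never overshoot the target
        pos += math.copysign(step, target - pos)
        positions.append(pos)
    return positions
```

Because the per-frame step shrinks with the remaining distance, the blend space receives a smoothly decreasing speed input, which is what slows the wing-flapping animation gradually.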
3) When eDodoState is set to DODO_STATE_MONTAGE_FOR_PLAYER, the AI behavior tree calls the conditional branch where eDodoState == DODO_STATE_MONTAGE_FOR_PLAYER to control the dodo's behavior. The Dodo_Happy_Montage resource is played, the dodo performs its excited action, the system waits one second after the action has finished playing, and the eDodoState variable is then set back to DODO_STATE_IDLE.
Under the conditional branch where eDodoState == DODO_STATE_IDLE, the AI behavior tree also handles the logic of whether the dodo has found food, that is, whether the food object ObjectFood is not NULL. The logic that sets ObjectFood is implemented in the dodo blueprint as follows:
1) First, a trigger range resembling environmental perception is established centered on the dodo's position (by calling the MultiSphereTraceForObjects method). All food entering this range is captured, and the first food object to enter the range is set as ObjectFood.
2) When ObjectFood is not NULL, the AI behavior tree begins to call the logic of the ObjectFood-not-NULL conditional branch to control the dodo's behavior. The AIMoveTo method is called first so that the dodo moves quickly to the side of the food (the same handling as when the dodo runs to a random position away from a player); the dodo's bStartEat variable is then set to true, and on that assignment the dodo animation blueprint plays the dodo's Dodo_Graze frame animation to eat the food.
3) To eat the food accurately, an IK (inverse kinematics) step is added. The food's position is first converted into the dodo's local space by calling InverseTransformLocation; the resulting position is assigned to targetPos, which is passed as the parameter input of SetupTask, so that the dodo's beak stretches exactly to the position of the food. (While the grazing frame animation plays, the food is destroyed and the food object ObjectFood is set back to NULL.)
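The IK idea the embodiment invokes can be illustrated with the textbook two-bone case. This Python sketch solves a planar two-segment chain with the law of cosines; it only demonstrates the principle of reaching an exact local-space target, and is not the engine task the patent names, which solves the same problem on the skeleton at runtime.

```python
import math

def two_bone_ik(l1, l2, target):
    """Minimal planar two-bone IK: given upper and lower segment lengths and
    a 2-D target in the root's local space, return (shoulder, elbow) joint
    angles in radians that place the chain's end on the target."""
    x, y = target
    d = min(math.hypot(x, y), l1 + l2 - 1e-9)   # clamp to the maximum reach
    # Law of cosines gives the elbow's interior angle; the joint bend is its
    # supplement.
    cos_int = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_int)))
    cos_a = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = math.atan2(y, x) - math.acos(max(-1.0, min(1.0, cos_a)))
    return shoulder, elbow

def forward(l1, l2, shoulder, elbow):
    """Forward kinematics, used to check that the IK solution lands on target."""
    ex = l1 * math.cos(shoulder)
    ey = l1 * math.sin(shoulder)
    return (ex + l2 * math.cos(shoulder + elbow),
            ey + l2 * math.sin(shoulder + elbow))
```

Feeding the food's local-space position (the targetPos of the embodiment) into such a solver yields joint angles that land the end of the chain precisely on the food, which is why the beak meets the food instead of biting the air.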
In actual use of the invention, the player wearing VR glasses has a better way of interacting with the dodos in the scene: the experience feels real, as though the player were watching dodos in a real environment, which raises the realism of the player's in-game experience.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (8)

1. A method for implementing AI behavior of an NPC in a VR game, the AI behavior of the NPC being implemented by an NPC blueprint, an AI controller of the NPC, an AI behavior tree of the NPC, and an NPC animation blueprint, the method comprising: the NPC blueprint decides whether the NPC walks freely, the interaction between the NPC and a game player, and the interaction between the NPC and virtual objects in the game scene; the AI controller of the NPC assigns the results of the NPC blueprint's decisions, as variables, to the blackboard data of the NPC's AI behavior tree; the NPC's AI behavior tree uses the blackboard data to decide the NPC's behavior in each state and the switching among behaviors; the NPC animation blueprint determines, from the NPC blueprint's variable assignments, which frame animation and montage the NPC should play in each state;
a Tick (heartbeat) function in the NPC blueprint handles the logical judgment between the NPC and its environment, and after the game player enters the game scene, the NPC blueprint sets the NPC's state variable to escape or MONTAGE to realize interaction between the NPC and the game player;
in the escape state, the NPC animation blueprint calls an animation blend space;
the specific method for setting the state variable of the NPC as escape or MONTAGE by the NPC blueprint is as follows: setting the values of the probability variables of the escape state and the MONTAGE state, adding the values of the probability variables of the escape state and the MONTAGE state, removing a random number between zero and the sum of the values of the probability variables of the escape state and the MONTAGE state by using a coordinate conversion method, if the random number is less than or equal to the value of the probability variable of the escape state, setting the state as the escape state, otherwise, setting the state as the MONTAGE state.
2. The method for implementing AI behavior of an NPC in a VR game according to claim 1, wherein the NPC walks freely in the game scene as follows:
the NPC blueprint initializes the NPC state variable to the idle state;
under the conditional branch where the NPC state variable is equal to the idle state, the NPC's AI behavior tree controls the NPC's behavior and finds a random position in the path-finding grid of the game scene;
a smart-move method is called to move the NPC to the random position;
the NPC animation blueprint receives the NPC blueprint's assignment to the NPC state variable and calls the walking-state animation blend space;
the NPC plays the walking animation and walks to the random position.
3. The method for implementing AI behavior of an NPC in a VR game according to claim 1, wherein: in the escape state, the NPC's AI behavior tree calls the conditional branch where the NPC state is equal to the escape state to control the NPC's behavior and finds a random position in the path-finding grid of the game scene; the NPC animation blueprint calls the walking-state animation blend space; the NPC moves to the random position; and after the NPC reaches it, the NPC's state is set to idle.
4. The method for implementing AI behavior of an NPC in a VR game according to claim 3, wherein the NPC moves to the random position with uniform deceleration.
5. The method for implementing AI behavior of an NPC in a VR game according to claim 4, wherein, when the NPC state is MONTAGE, the NPC's AI behavior tree calls the conditional branch where the NPC state is equal to the MONTAGE state to control the NPC's behavior; the montage resource is played, the NPC performs the corresponding action, and after the action finishes the NPC's state is set to idle.
6. The method for implementing AI behavior of an NPC in a VR game according to claim 1, wherein the NPC interacts with a virtual object as follows:
under the conditional branch of the NPC's AI behavior tree where the NPC state is equal to the idle state,
an environment-perception trigger range is established centered on the NPC's position, and virtual objects entering the range are captured;
the first captured virtual object is set as the target object; if the target object is null, the target object is re-determined;
when the target object is not null, the NPC's AI behavior tree calls the logic of the target-object-not-null conditional branch to control the NPC's behavior;
a smart-move method is called to move the NPC to the target object;
the NPC's interaction variable with the virtual object is set to true;
the NPC animation blueprint plays the interaction animation between the NPC and the virtual object according to this assignment;
and after the interaction finishes, the virtual object is removed and the target object is set to null.
7. The method for implementing AI behavior of an NPC in a VR game according to claim 6, wherein the position of the virtual object is converted into the NPC's local space by a coordinate transformation method and assigned to a variable, and the NPC interacts precisely with the virtual object according to that assignment.
8. The method for implementing AI behavior of an NPC in a VR game according to claim 2 or 3, wherein the random position is found at a distance between 200 and 500 units centered on the NPC's position.
CN201711446880.5A 2017-12-27 2017-12-27 Method for realizing AI (artificial intelligence) behavior of an NPC (non-player character) in a VR (virtual reality) game Active CN108031121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711446880.5A CN108031121B (en) 2017-12-27 2017-12-27 Method for realizing AI (artificial intelligence) behavior of an NPC (non-player character) in a VR (virtual reality) game

Publications (2)

Publication Number Publication Date
CN108031121A CN108031121A (en) 2018-05-15
CN108031121B true CN108031121B (en) 2021-05-28

Family

ID=62097529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711446880.5A Active CN108031121B (en) 2017-12-27 2017-12-27 Method for realizing AI (artificial intelligence) behavior of an NPC (non-player character) in a VR (virtual reality) game

Country Status (1)

Country Link
CN (1) CN108031121B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188212A (en) * 2019-05-21 2019-08-30 浙江开奇科技有限公司 Image processing method and terminal device for digital tour guiding
CN111298439B (en) * 2020-01-21 2021-04-13 腾讯科技(深圳)有限公司 Data processing method, device, medium and electronic equipment
CN112121410B (en) * 2020-10-22 2024-04-12 深圳市瑞立视多媒体科技有限公司 VR game-based cabinet-entering method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101265051B1 (en) * 2012-05-30 2013-05-16 주식회사 쏘그웨어 Method for providing non player character artificial intelligence according to game user level
CN106775703A (en) * 2016-12-09 2017-05-31 网易(杭州)网络有限公司 Using the processing method and processing device of logic

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AI and Behavior Trees in UE4 (Blueprint edition); 墨痕_; <https://blog.csdn.net/yangxuan0261/article/details/50272365>; 2015-12-12; pp. 1-7 *
AI Patrolling in Place in UE4 (Part 2); 董国政; <https://blog.csdn.net/qq_36409711/article/details/73480213>; 2017-06-20; pp. 1-2 *

Also Published As

Publication number Publication date
CN108031121A (en) 2018-05-15

Similar Documents

Publication Publication Date Title
US11318390B2 (en) Systems and methods for hardware-based matchmaking
CN108031121B (en) Method for realizing AI (Artificial intelligence) behavior of NPC (non-player character) in VR (virtual reality) game
CN108597530B (en) Sound reproducing method and apparatus, storage medium and electronic apparatus
WO2022001652A1 (en) Virtual character control method and apparatus, computer device, and storage medium
US20220032186A1 (en) Operation control display method and apparatus based on virtual scene
CN111298433B (en) Animation video processing method and device, electronic equipment and storage medium
US10758826B2 (en) Systems and methods for multi-user editing of virtual content
US11783523B2 (en) Animation control method and apparatus, storage medium, and electronic device
JP7137718B2 (en) Virtual object selection method, apparatus, equipment and computer program
US20230245385A1 (en) Interactive method and apparatus based on virtual scene, device, and medium
TW202227172A (en) Method of presenting virtual scene, device, electrical equipment, storage medium, and computer program product
CN113209618B (en) Virtual character control method, device, equipment and medium
US20220168656A1 (en) Virtual operation object control method and apparatus, storage medium, and electronic device
KR20220163452A (en) Interaction processing method of virtual props, device, electronic device and readable storage medium
WO2023288034A1 (en) Spatialized audio chat in a virtual metaverse
CN115888119A (en) Game AI training method, device, electronic equipment and storage medium
US20230347247A1 (en) Virtual character control method and apparatus, storage medium, and electronic device
US20130095931A1 (en) Data management for computer systems
Jain et al. Time-scaled interactive object-driven multi-party VR
Magdy et al. Deep reinforcement learning approach for augmented reality games
CN116196611A (en) Somatosensory game method based on waving action
CN112370782B (en) Auxiliary resource display method and device, storage medium and electronic equipment
US10668384B2 (en) System using rule based techniques for handling gameplay restrictions
CN114307150A (en) Interaction method, device, equipment, medium and program product between virtual objects
Nakayama et al. Teleoperated service robot with an immersive mixed reality interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant