CN113975799A - Object action control method, device, equipment and storage medium


Info

Publication number
CN113975799A
CN113975799A (application number CN202111260109.5A)
Authority
CN
China
Prior art keywords
action
prediction model
act
successfully
combination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111260109.5A
Other languages
Chinese (zh)
Inventor
彭炎亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Pudong Development Bank Co Ltd
Original Assignee
Shanghai Pudong Development Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Pudong Development Bank Co Ltd filed Critical Shanghai Pudong Development Bank Co Ltd
Priority to CN202111260109.5A
Publication of CN113975799A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the invention disclose a method, an apparatus, a device and a storage medium for controlling object actions. The method may include: when it is predicted, based on a first action prediction model, that a second object belonging to the same object team as a first object has an action demand on an action object of the first object, predicting whether the action can succeed when the first object and the second object act on the action object together; if it is determined from the prediction result of the first action prediction model that an object combination comprising the first object and the second object can successfully act on the action object, predicting, based on a second action prediction model, whether the second object has an action demand on the action object and, if so, whether the action can succeed when the first object and the second object act on the action object together; and when it is determined from the prediction result of the second action prediction model that the object combination can successfully act on the action object, controlling the object combination to act on the action object. The technical solution of the embodiments of the invention achieves cooperative action among objects.

Description

Object action control method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a method, a device, equipment and a storage medium for controlling object actions.
Background
Nowadays, with the rapid development of various technologies, there are more and more application scenarios in which robots perform certain tasks instead of human beings. However, robots with limited intelligence essentially act alone when performing tasks, which makes it difficult for them to complete tasks with high quality.
Disclosure of Invention
The embodiments of the invention provide a method, an apparatus, a device and a storage medium for controlling object actions, so as to solve the problem that objects cannot act cooperatively.
In a first aspect, an embodiment of the present invention provides a method for controlling an object action, which may include:
acquiring a first action prediction model and an action object of a first object, and a second action prediction model of a second object belonging to the same object team as the first object;
predicting, based on the first action prediction model, whether the second object has an action demand on the action object, and if so, predicting whether the action can succeed when the first object and the second object act on the action object together;
if it is determined, according to the prediction result of the first action prediction model, that an object combination comprising the first object and the second object can successfully act on the action object, predicting, based on the second action prediction model, whether the second object has an action demand on the action object, and if so, predicting whether the action can succeed when the first object and the second object act on the action object together;
and when it is determined, according to the prediction result of the second action prediction model, that the object combination can successfully act on the action object, controlling the object combination to act on the action object.
In a second aspect, an embodiment of the present invention further provides an apparatus for controlling an object action, which may include:
a model acquisition module, configured to acquire a first action prediction model and an action object of a first object, and a second action prediction model of a second object belonging to the same object team as the first object;
a first action prediction module, configured to predict, based on the first action prediction model, whether the second object has an action demand on the action object, and if so, to predict whether the action can succeed when the first object and the second object act on the action object together;
a second action prediction module, configured to, if it is determined according to the prediction result of the first action prediction model that an object combination comprising the first object and the second object can successfully act on the action object, predict, based on the second action prediction model, whether the second object has an action demand on the action object, and if so, to predict whether the action can succeed when the first object and the second object act on the action object together;
and an action control module, configured to control the object combination to act on the action object when it is determined, according to the prediction result of the second action prediction model, that the object combination can successfully act on the action object.
In a third aspect, an embodiment of the present invention further provides a device for controlling an object action, where the device may include:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method for controlling object actions provided by any embodiment of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for controlling the object action provided in any embodiment of the present invention.
According to the technical solution of the embodiments of the invention, a first action prediction model and an action object of a first object, and a second action prediction model of a second object belonging to the same object team as the first object, are acquired; whether the second object has an action demand on the action object is predicted based on the first action prediction model, and if so, whether the action can succeed when the first object and the second object act on the action object together is predicted; if it is determined, according to the prediction result of the first action prediction model, that an object combination comprising the first object and the second object can successfully act on the action object, that is, once the first object has determined to act on the action object, whether the second object has an action demand on the action object is predicted again based on the second action prediction model, and if so, whether the action can succeed when the first object and the second object act on the action object together is predicted; and when it is determined, according to the prediction result of the second action prediction model, that the object combination can successfully act on the action object, the object combination is controlled to act on the action object. In the above technical solution, the action prediction model of an object predicts the object's own action and/or the actions of the other objects in the same object team whose actions have not been determined; when at least two action prediction models predict that an object combination comprising the objects respectively corresponding to those models can successfully act on the action object, the objects in the object combination are controlled to cooperate as a team to act on the action object, thereby achieving cooperative action among the objects.
Drawings
FIG. 1 is a flowchart of a method for controlling object actions according to a first embodiment of the present invention;
FIG. 2 is a flowchart of a method for controlling object actions according to a second embodiment of the present invention;
FIG. 3 is a flowchart of a method for controlling object actions according to a third embodiment of the present invention;
FIG. 4 is a schematic diagram of advance points and positions in a method for controlling object actions according to a third embodiment of the present invention;
FIG. 5 is a block diagram of an apparatus for controlling object actions according to a fourth embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a device for controlling object actions according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before the embodiments of the present invention are described, an application scenario of the embodiments is described by way of example. For the application scenario set forth in the background, in which robots perform certain tasks instead of human beings, examples include robots that play against real players in confrontation-type games, robots that carry items, and robots that assemble items. Currently, each robot basically acts alone, such as separately fighting a live player, separately carrying an item, or separately assembling an item. However, in some cases it is difficult for a single robot to perform a task with high quality: a single robot may lack the intelligence to resist a real player, a heavy object may be difficult to carry alone, a complex object may be difficult to assemble alone, and so on, so the robots need to cooperate as a team to perform the task together. Therefore, how to achieve team cooperation among robots is a technical problem that urgently needs to be solved.
It should be noted that, on the one hand, the above description only takes robots as an example; in practical applications the object may also be an unmanned aerial vehicle, an unmanned ship, or another object with a certain degree of automation, which is not specifically limited here. On the other hand, for a more intuitive understanding of the following embodiments, a confrontation-type game is taken as an example, but the confrontation-type game is only one application scenario of the following embodiments and does not limit the application scenario.
Example one
Fig. 1 is a flowchart of a method for controlling object actions according to a first embodiment of the present invention. The present embodiment is applicable to the case where object actions are controlled by predicting whether team cooperation between objects should be performed. The method can be executed by the object action control apparatus provided by the embodiments of the present invention; the apparatus can be implemented in software and/or hardware and can be integrated into an object action control device, which may be any of various user terminals or servers.
Referring to fig. 1, the method of the embodiment of the present invention specifically includes the following steps:
s110, a first action prediction model and an action object of a first object and a second action prediction model of a second object belonging to the same object team with the first object are obtained.
Here, the first action prediction model may be a model used by the first object to predict its own action and/or the actions of the remaining objects whose actions have not been determined; the action object may be an object that the first object needs to act on, although whether it is actually acted on remains to be determined; the second object may be an object belonging to the same object team as the first object, that is, an object that can perform team cooperation with the first object; and the second action prediction model may be a model used by the second object to predict its own action and/or the actions of the remaining objects whose actions have not been determined. In other words, when prediction is performed based on the first action prediction model, the prediction takes the first object as its starting point; similarly, when prediction is performed based on the second action prediction model, the prediction takes the second object as its starting point. Each object thus takes itself as a single dimension of thinking and decides whether to perform team cooperation to act on the action object together by predicting the actions of the remaining objects whose actions have not been determined.
And S120, predicting whether the second object has an action demand on the action object based on the first action prediction model, and if so, predicting whether the action can be successfully carried out when the first object and the second object jointly act on the action object.
The first object has an action demand on the action object, and whether a second object whose action has not been determined also has an action demand on the action object can be predicted based on the first action prediction model. If so, whether the action can succeed when the first object and the second object act on the action object together can be predicted, so as to determine whether to control the first object and the second object to cooperate as a team to act on the action object. On this basis, optionally, if not (i.e., the second object does not have an action demand on the action object), one of the remaining objects whose actions have not been determined may be taken as the second object, and S120 is performed again. Optionally, if the action cannot succeed when the first object and the second object act on the action object together, one of the remaining objects whose actions have not been determined may likewise be taken as the second object, and S120 is performed again.
And S130, if it is determined that the object combination including the first object and the second object can successfully act on the action object according to the prediction result of the first action prediction model, predicting whether the second object has an action demand on the action object based on the second action prediction model, and if so, predicting whether the action can be successfully performed when the first object and the second object jointly act on the action object.
The object combination may include other objects in addition to the first object and the second object, which is not specifically limited here. If it is determined from the prediction result of the first action prediction model that the object combination can successfully act on the action object, that is, that the action will succeed if the objects in the object combination are controlled to act on the action object together, this means that the first object has already determined to act on the action object; the first object can therefore be regarded as an object whose action has been determined, and the actions of the remaining objects other than the first object can then be predicted based on the second action prediction model. On this basis, optionally, after predicting whether the action can succeed based on the first action prediction model, the method for controlling object actions may further include: if so, associating the action object with the first object. Correspondingly, determining, according to the prediction result of the first action prediction model, that an object combination including the first object and the second object can successfully act on the action object may include: determining, according to the prediction result of the first action prediction model together with the obtained association result between the action object and the first object, that the object combination including the first object and the second object can successfully act on the action object. In other words, the action object is associated with the first object when the object combination can successfully act on the action object. Since information is shared among the objects, the control device can predict the actions of the objects other than the first object based on the second action prediction model after obtaining the association result between the action object and the first object.
When prediction is performed based on the second action prediction model, whether the second object has an action demand on the action object can be predicted, and if so, whether the action can succeed when the first object and the second object act on the action object together can be predicted. On this basis, optionally, if not (i.e., the second object does not have an action demand on the action object), the second action prediction model ends its prediction. Alternatively, if it is determined that the action cannot succeed when the first object and the second object act on the action object together, prediction can continue for the remaining objects whose actions have not been determined.
The first action prediction model and the second action prediction model may be the same action prediction model or different action prediction models; the distinction between them is made only to indicate from which object's point of view the prediction is performed, and their specific contents are not limited.
And S140, when the object combination is determined to be capable of successfully acting on the action object according to the prediction result of the second action prediction model, controlling the object combination to act on the action object.
When it is determined, according to the prediction result of the second action prediction model, that the object combination can successfully act on the action object, meaning that both the first object and the second object have determined that the object combination can successfully act on the action object, the object combination can be controlled to act on the action object. In other words, once at least two objects in the object combination determine that the object combination can successfully act on the action object, the object combination is controlled to act. On this basis, optionally, in order to increase the probability of a successful action, the object combination may be controlled to act only when every object in the object combination determines that the object combination can successfully act on the action object.
It should be noted that, in the above technical solution, each single object is taken as a single dimension of thinking, and whether to perform team cooperation is determined from the information acquired by each object. Specifically, while assuming its own action, an object can predict the actions of the objects in the same object team whose actions have not been determined, and then explicitly settle its own action according to the prediction result. In this way, whether to perform team cooperation can be determined from the explicit actions of certain objects in the object team, which improves the degree of intelligence of each object and further enables team cooperation among the objects.
According to the technical solution of this embodiment of the invention, a first action prediction model and an action object of a first object, and a second action prediction model of a second object belonging to the same object team as the first object, are acquired; whether the second object has an action demand on the action object is predicted based on the first action prediction model, and if so, whether the action can succeed when the first object and the second object act on the action object together is predicted; if it is determined, according to the prediction result of the first action prediction model, that an object combination comprising the first object and the second object can successfully act on the action object, that is, once the first object has determined to act on the action object, whether the second object has an action demand on the action object is predicted again based on the second action prediction model, and if so, whether the action can succeed when the first object and the second object act on the action object together is predicted; and when it is determined, according to the prediction result of the second action prediction model, that the object combination can successfully act on the action object, the object combination is controlled to act on the action object. In this technical solution, the action prediction model of an object predicts the object's own action and/or the actions of the other objects in the same object team whose actions have not been determined; when at least two action prediction models predict that an object combination comprising the objects respectively corresponding to those models can successfully act on the action object, the objects in the object combination are controlled to cooperate as a team to act on the action object, thereby achieving cooperative action among the objects.
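To make the two-stage confirmation flow of S110-S140 easier to follow, a minimal Python sketch is given below. It is an illustration only: the class ActionPredictionModel, its methods has_action_demand and can_act_successfully, and the strength-based success rule are assumptions introduced for this sketch and are not part of the disclosed method.

```python
from dataclasses import dataclass

@dataclass
class ActionPredictionModel:
    """Illustrative stand-in for an object's action prediction model (assumed interface)."""
    owner: str  # the object from whose point of view the prediction is made

    def has_action_demand(self, other: str, target: str, world: dict) -> bool:
        # Assumption: an object has an action demand if the target lies within its preset range.
        return target in world.get("in_range", {}).get(other, set())

    def can_act_successfully(self, actors: tuple, target: str, world: dict) -> bool:
        # Assumption: the joint action succeeds if the combined strength exceeds the target's.
        return sum(world["strength"][a] for a in actors) > world["strength"][target]


def control_object_combination(first: str, second: str, target: str, world: dict) -> bool:
    """Return True and trigger the joint action only if both models confirm success (S110-S140)."""
    model_1 = ActionPredictionModel(first)   # first action prediction model
    model_2 = ActionPredictionModel(second)  # second action prediction model

    # S120: from the first object's point of view, predict whether the second object
    # also has an action demand on the target and whether the joint action can succeed.
    if not (model_1.has_action_demand(second, target, world)
            and model_1.can_act_successfully((first, second), target, world)):
        return False

    # S130: the first object has now determined its action; re-check from the
    # second object's point of view using the second action prediction model.
    if not (model_2.has_action_demand(second, target, world)
            and model_2.can_act_successfully((first, second), target, world)):
        return False

    # S140: both predictions agree, so the object combination acts on the target together.
    print(f"{first} and {second} act on {target} together")
    return True


if __name__ == "__main__":
    world = {"in_range": {"B": {"X"}}, "strength": {"A": 3, "B": 4, "X": 6}}
    control_object_combination("A", "B", "X", world)
```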
In an optional technical solution, the method for controlling object actions may further include: determining whether, among the candidate objects within a preset range of the first object, there is an object on which the first object can act successfully on its own, where the preset range includes a preset action range or a preset visual field range; and if not, determining the action object from the candidate objects. The preset action range may be the range within which the first object can perform an action, the preset visual field range may be the range within which the first object can observe the remaining objects, and a candidate object may be an object within the preset range. Since an object that requires team cooperation is one that cannot be acted on successfully by the first object alone, when none of the candidate objects can be acted on successfully by the first object on its own, an action object that needs to be acted on together with the remaining objects can be determined from the candidate objects, and the subsequent prediction steps can then be performed.
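As an illustration of this selection step, the short sketch below checks whether any candidate within the preset range can be handled by the first object alone and, if not, picks an action object for the cooperative prediction that follows. The strength-based test and the choice of the weakest candidate are assumptions made for the sketch, not requirements of the disclosure.

```python
from typing import Dict, Optional

def select_action_object(first_strength: float,
                         candidates: Dict[str, float]) -> Optional[str]:
    """candidates maps each candidate object inside the first object's preset range
    (preset action range or preset visual field range) to its assumed strength."""
    # If some candidate can already be acted on successfully by the first object alone,
    # no cooperative action object needs to be determined here.
    if any(first_strength > strength for strength in candidates.values()):
        return None
    # Otherwise pick a candidate (here: the weakest one) as the action object for the
    # subsequent cooperative prediction steps.
    return min(candidates, key=candidates.get) if candidates else None

# Example: the first object (strength 3) cannot handle any candidate alone, so "X" is chosen.
print(select_action_object(3, {"X": 6, "Y": 8}))  # -> X
```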
In order to understand the specific implementation of the above steps more intuitively, the attack process in a confrontation-type game is illustrated below. It should be noted that the terms attack, kill, and the like used below are merely specific forms of the action and do not limit the action. For example, taking the first object as robot A (hereinafter A), whether A can move is judged by whether a teammate, an enemy target, an obstacle, or the like occupies the positions adjacent to A's current position:
1) If A cannot move, it performs the most valuable action, specifically attacking an enemy target within the preset attack range that it can kill successfully; optionally, the kill succeeds when A's attack power minus the enemy target's defensive power is greater than the enemy target's blood volume. If no such enemy target (i.e., an enemy target that A can kill on its own) exists, A may attack an enemy target within the preset attack range that A's teammates have already determined to attack, thereby assisting those teammates. If no such enemy target exists either, the enemy target with the lowest blood volume within the preset attack range may be selected for attack. In practical applications, the preset attack range may be the range formed by the positions adjacent to A's current position.
2) If A can move, all enemy targets within the preset visual field range are traversed, and the enemy targets that have already been locked by A's teammates for explicit attack and can be killed in the current round are excluded. On this basis, it is judged whether there is an enemy target that could kill one of A's teammates in the current round; if so, that enemy target is determined as the attack target of the current round. In practical applications, the preset visual field range may be the range formed by the positions whose distance from A's current position is within a preset distance threshold.
3) If no attackable enemy target is determined through the above steps, whether to perform team cooperation can be predicted. Specifically, for an enemy target X within A's preset attack range, a teammate that also has X within its preset attack range (assumed to be robot B) is determined from A's teammates. Assuming that A attacks X, whether B will also attack X is predicted by recursively applying steps 1), 2) and 3) (specifically, the prediction is performed by A's action prediction model). If not, A gives up attacking X; otherwise, it is judged whether B might be killed by X in the process of attacking X. If so, A gives up attacking X; otherwise, when it is determined from the prediction result of A's action prediction model that the combined force of A and B can kill X without being killed in return during the attack, X is taken as A's attack target for the current round. In practical applications, optionally, the enemy targets within A's preset attack range can be traversed in order of their distance from A, and if one enemy target cannot be killed through team cooperation, the next enemy target is considered. Optionally, when at least two teammates have X within their preset attack range, those teammates can be predicted one by one.
4) If A's attack target is determined to be X through the above steps, X is associated with A, so that B's action prediction model does not predict A, whose attack target has already been determined, when it performs prediction. In practical applications, optionally, the association between X and A may be implemented by recording the current coordinates of X as A's movement coordinates, so that B's action prediction model can determine, from the movement coordinates recorded for A, which enemy target A intends to attack.
The above example can effectively achieve team cooperation and maximize the benefit obtained, thereby improving the game winning rate and increasing the interactivity and interest of the game.
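The following sketch condenses steps 1), 3) and 4) above into runnable form; step 2), the field-of-view traversal, and the counter-kill check are omitted for brevity. The kill rule follows the optional condition in step 1) (attack power minus defensive power greater than blood volume); the function names, stats layout, and numbers are assumptions for illustration, not a definitive implementation.

```python
def can_kill(attack, defense, blood):
    """Optional rule from step 1): a kill succeeds when attack - defense > blood."""
    return attack - defense > blood

def predict_team_attack(a, enemies, teammates, stats, assumed=None):
    """Sketch of steps 1), 3) and 4): A first looks for a target it can kill alone, then
    assumes an attack on X and predicts whether a teammate B would join in.

    stats[name] holds (attack, defense, blood); enemies/teammates are the targets and
    teammates within A's preset attack range. All names and numbers are illustrative.
    """
    assumed = assumed or {}
    atk_a, _, _ = stats[a]

    # Step 1): attack any enemy target that A can kill on its own.
    for x in enemies:
        _, dfs_x, hp_x = stats[x]
        if can_kill(atk_a, dfs_x, hp_x):
            return {a: x}

    # Step 3): assume A attacks X and predict whether a teammate B would attack X too,
    # and whether their combined attack can kill X (prediction via A's model).
    for x in enemies:
        _, dfs_x, hp_x = stats[x]
        for b in teammates:
            if b in assumed:          # step 4): skip teammates whose target is already fixed
                continue
            atk_b, _, _ = stats[b]
            if can_kill(atk_a + atk_b, dfs_x, hp_x):
                # Associate X with A (step 4) so B's model will not re-predict A.
                return {a: x, b: x}
    return {}

stats = {"A": (4, 1, 10), "B": (5, 1, 10), "X": (0, 2, 6)}
print(predict_team_attack("A", ["X"], ["B"], stats))  # -> {'A': 'X', 'B': 'X'}
```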
In order to understand the specific implementation of the above steps more intuitively, an item-carrying task is taken as another example. For instance, when it is determined that robot A cannot carry item X on its own, it is assumed that A carries X, and whether robot B also needs to carry X is predicted based on A's action prediction model; if so, whether the combined force of A and B can carry X successfully is predicted, and if so, X is associated with A. After the association result between X and A is obtained, whether B needs to carry X is predicted based on B's action prediction model, and if so, whether the combined force of A and B can carry X successfully is predicted; if so, A and B are controlled to carry X together, so that the carrying task for X is completed through the team cooperation of A and B.
Example two
Fig. 2 is a flowchart of a method for controlling object actions according to a second embodiment of the present invention. The present embodiment is optimized on the basis of the above technical solutions. In this embodiment, optionally, after predicting whether the action can succeed based on the first action prediction model, the method for controlling object actions may further include: if not, acquiring a third object belonging to the same object team as the first object, and predicting whether the third object has an action demand on the action object; and if so, predicting whether the action can succeed when the first object, the second object and the third object act on the action object together. Correspondingly, determining, according to the prediction result of the first action prediction model, that an object combination can successfully act on the action object may include: determining, according to the prediction result of the first action prediction model, that an object combination including the first object, the second object and the third object can successfully act on the action object. Correspondingly, after predicting whether the action can succeed based on the second action prediction model, the method may further include: if not, predicting whether the third object has an action demand on the action object; and if so, predicting whether the action can succeed when the first object, the second object and the third object act on the action object together. Correspondingly, controlling the object combination to act on the action object when it is determined, according to the prediction result of the second action prediction model, that the object combination can successfully act on the action object may include: when it is determined, according to the prediction result of the second action prediction model, that the object combination can successfully act on the action object, predicting, based on a third action prediction model of the third object, whether the third object has an action demand on the action object, and if so, predicting whether the action can succeed when the first object, the second object and the third object act on the action object together; and when it is determined, according to the prediction result of the third action prediction model, that the object combination can successfully act on the action object, controlling the object combination to act on the action object. Terms that are the same as or correspond to those in the above embodiments are not explained in detail here.
Referring to fig. 2, the method of the present embodiment may specifically include the following steps:
s210, a first action prediction model and an action object of a first object and a second action prediction model of a second object belonging to the same object team with the first object are obtained.
S220, whether the second object has an action demand on the action object is predicted based on the first action prediction model, and if yes, whether the action can be successfully performed when the first object and the second object act on the action object together is predicted.
On this basis, optionally, if not (i.e., the second object does not have an action requirement for the action object), the remaining objects with undetermined actions in the same object team are taken as the second object, and the step returns to the step S220.
And S230, if not, acquiring a third object belonging to the same object team as the first object, and predicting whether the third object has an action demand on the action object based on the first action prediction model.
The process of predicting the motion of the third object based on the first motion prediction model is similar to the process of predicting the motion of the second object, and is not described herein again.
On this basis, optionally, if so (that is, if the action can succeed when the first object and the second object act on the action object together), the first action prediction model ends its prediction; whether the second object has an action demand on the action object is then predicted based on the second action prediction model, and if so, whether the action can succeed when the first object and the second object act on the action object together is predicted.
And S240, if so, predicting whether the action can be successfully performed when the first object, the second object and the third object jointly perform the action on the action object based on the first action prediction model.
On this basis, optionally, if not (that is, if the action cannot be successfully performed when the first object, the second object, and the third object collectively perform an action on the action object), the remaining objects in the object team whose actions are not determined may be regarded as the third object, and the process returns to perform S230.
And S250, if it is determined that the object combination comprising the first object, the second object and the third object can successfully act on the action object according to the prediction result of the first action prediction model, predicting whether the second object has an action requirement on the action object based on the second action prediction model, and if so, predicting whether the action can be successfully performed when the first object and the second object act on the action object together.
In other words, the object combination may include other objects in the object team besides the first object, the second object and the third object, and is not specifically limited herein. Since the first object has determined to act on the action object, the actions of the remaining objects other than the first object may be predicted based on the second action prediction model, such as predicting the actions of the second object and the third object within the combination of objects.
And S260, if not, predicting whether the third object has an action demand on the action object or not based on the second action prediction model.
And S270, if so, predicting whether the action can be successfully performed when the first object, the second object and the third object jointly perform the action on the action object based on the second action prediction model.
And S280, when it is determined, according to the prediction result of the second action prediction model, that the object combination can successfully act on the action object, predicting, based on the third action prediction model of the third object, whether the third object has an action demand on the action object, and if so, predicting whether the action can succeed when the first object, the second object and the third object act on the action object together.
The prediction process of the third motion prediction model is similar to the prediction process of the second motion prediction model, and is used for predicting the motion of an object with undetermined motion in the object team, and is not repeated here.
And S290, when the object combination is determined to be capable of successfully acting on the action object according to the prediction result of the third action prediction model, controlling the object combination to act on the action object.
In order to better understand the specific implementation of the above technical solution, the first object A, the second object B, the third object C, and the action object X are taken as an example. Assuming that A acts on X, whether B wants to act on X is predicted based on the first action prediction model, and if so, whether the combined force of A and B can act on X successfully is predicted; if not, whether C wants to act on X continues to be predicted based on the first action prediction model, and if so, whether the combined force of A, B and C can act on X successfully is predicted; if so, A is associated with X. Further, since A has already determined to act on X, whether B wants to act on X is predicted based on the second action prediction model, and if so, whether the combined force of A and B can act on X successfully is predicted; if not, whether C wants to act on X continues to be predicted based on the second action prediction model, and if so, whether the combined force of A, B and C can act on X successfully is predicted; if so, B is associated with X. Further, since both A and B have determined to act on X, whether C wants to act on X is predicted based on the third action prediction model, and if so, whether the combined force of A, B and C can act on X successfully is predicted; if so, C is associated with X, and A, B and C are controlled to act on X together, thereby achieving the team cooperation of A, B and C.
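The walkthrough above can be generalised to an arbitrary number of teammates: each object's model in turn re-confirms, given the actions already fixed by the earlier objects, that a successful combination still exists. The sketch below shows one way to express that chain of confirmations; the names and the summed-strength success rule are illustrative assumptions.

```python
def confirm_cooperation(objects, target, strengths):
    """Each object's model in turn re-checks, under the actions already fixed by earlier
    objects, that the whole combination still wants to and can act on the target."""
    confirmed = []                      # objects whose action on the target is fixed
    for obj in objects:                 # A's model, then B's, then C's, ...
        remaining = [o for o in objects if o not in confirmed and o != obj]
        # The current model adds undetermined objects one by one until the combination
        # of already-confirmed objects plus obj plus some remaining objects succeeds.
        combo = confirmed + [obj]
        for other in remaining:
            if sum(strengths[o] for o in combo) > strengths[target]:
                break
            combo.append(other)
        if sum(strengths[o] for o in combo) <= strengths[target]:
            return None                 # this model cannot find a successful combination
        confirmed.append(obj)           # obj is associated with the target
    return confirmed                    # every model agreed: control the combination

print(confirm_cooperation(["A", "B", "C"], "X", {"A": 2, "B": 2, "C": 3, "X": 6}))
```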
According to the technical scheme of the embodiment of the invention, when the combined force of the first object and the second object is determined to be incapable of successfully acting on the action object according to the prediction result of the first action prediction model, the action of a third object belonging to the same object team can be predicted; similarly, in addition to predicting the motion of the second object based on the second motion prediction model, the motion of a third object may be predicted; similarly, the action of the third object can be predicted based on the third action prediction model, so that the effect of accurately determining whether to perform team cooperation among a plurality of objects is achieved through the mutual cooperation of the steps.
EXAMPLE III
Fig. 3 is a flowchart of a method for controlling object actions according to a third embodiment of the present invention. The present embodiment is optimized on the basis of the above technical solutions. In this embodiment, optionally, predicting, based on the first action prediction model and/or the second action prediction model, whether the action can succeed when the first object and the second object act on the action object together may include: predicting, from the advance points corresponding to the action object, a second advance point to which the second object moves when acting on the action object, where an advance point includes a position adjacent to the action position at which the action object is located; and predicting whether the action can succeed when the first object acts on the action object at a first advance point and the second object acts on the action object at the second advance point, where the first advance point includes the advance point at which the first object is located when acting on the action object. Terms that are the same as or correspond to those in the above embodiments are not explained in detail here. It should be noted that the following description takes prediction based on the first action prediction model as an example, but the same steps may also be performed based on the second action prediction model, or based on both, which is not specifically limited here.
Referring to fig. 3, the method of this embodiment may specifically include the following steps:
s310, a first action prediction model and an action object of a first object and a second action prediction model of a second object belonging to the same object team with the first object are obtained.
And S320, predicting whether the second object has an action demand on the action object based on the first action prediction model, and if so, predicting, from the advance points corresponding to the action object, a second advance point to which the second object moves when acting on the action object, where an advance point includes a position adjacent to the action position at which the action object is located.
Here, an advance point may be a position adjacent to the action position at which the action object is located, and the second advance point may be the advance point, among the advance points, at which the second object is located when acting on the action object.
And S330, predicting, based on the first action prediction model, whether the action can succeed when the first object acts on the action object at a first advance point and the second object acts on the action object at the second advance point together, where the first advance point includes the advance point at which the first object is located when acting on the action object.
In practical applications, optionally, the first advance point and the second advance point may be advance points such that, after the first object moves to the first advance point and the second object moves to the second advance point, they form an encirclement of the action object so that the action object, as a target, cannot escape; the first advance point may be predicted by the first action prediction model. When prediction is performed based on the first action prediction model, specifically, it may be predicted whether the action can succeed when the first object acts on the action object at the first advance point and the second object acts on the action object at the second advance point.
And S340, if it is determined that the object combination including the first object and the second object can successfully act on the action object according to the prediction result of the first action prediction model, predicting whether the second object has an action demand on the action object based on the second action prediction model, and if so, predicting whether the action can be successfully performed when the first object and the second object commonly act on the action object.
In practical applications, optionally, since the first advance point and the action of the first object have already been determined, when prediction is performed based on the second action prediction model, in addition to predicting whether the second object has an action demand on the action object and whether the action can succeed when the first object and the second object act on the action object together, the second advance point may also be predicted, and/or it may be specifically predicted whether the action can succeed when the first object is at the first advance point and the second object acts on the action object at the second advance point, and so on, which is not specifically limited here.
And S350, when the object combination is determined to be capable of successfully acting on the action object according to the prediction result of the second action prediction model, controlling the object combination to act on the action object.
According to the technical scheme of the embodiment of the invention, whether the action can be successfully carried out when the first object acts on the action object at the first advance point and the second object acts on the action object at the second advance point together is predicted by predicting the first advance point and the second advance point, so that the effect of accurately determining whether team cooperation among the objects is carried out is achieved.
On this basis, in an optional technical solution, the moving distance between the first object and the first advance point is smaller than the moving distance between the second object and the second advance point; a position adjacent to the first advance point and not adjacent to the action position is taken as a first position, and a position adjacent to the second advance point and not adjacent to the action position is taken as a second position. Controlling the object combination to act on the action object may include: controlling the first object to move towards the first advance point and controlling the second object to move towards the second advance point; when the first object moves to any first position, determining whether the first object needs to be controlled to dodge the action object; if so, determining a dodge position from the first positions and controlling the first object to move towards the dodge position; when the second object moves to any second position, controlling the first object to move to the first advance point and controlling the second object to move to the second advance point; and when the first object reaches the first advance point and the second object reaches the second advance point, controlling the object combination including the first object and the second object to act on the action object.
The first object is controlled to move towards the first advance point and the second object is controlled to move towards the second advance point. Since the moving distance between the first object and the first advance point is smaller than that between the second object and the second advance point, when the two objects move at the same speed the first object approaches the first advance point earlier than the second object approaches the second advance point. On this basis, a position adjacent to the first advance point and not adjacent to the action position is taken as a first position: when the first object has moved to any first position, it needs only one more move to reach the first advance point, while the second object has not yet reached the second advance point. Since the action object can also act on the first object, in order to avoid the first object being damaged by the action object, when the first object reaches any first position it can be determined, according to whether the action object is likely to damage the first object after the first object moves to the first advance point, whether the first object needs to be controlled to dodge the action object. If so (i.e., if dodging is required), a dodge position is determined from the first positions and the first object is controlled to move towards the dodge position, where the dodge position may be a first position that allows the first object to move to the first advance point quickly without letting the action object act on the first object. Further, a position adjacent to the second advance point and not adjacent to the action position is taken as a second position, that is, a position from which the second object can move to the second advance point quickly. When the second object moves to any second position, the first object is controlled to move to the first advance point and the second object is controlled to move to the second advance point, and when the first object reaches the first advance point and the second object reaches the second advance point, the objects in the object combination are controlled to cooperate as a team to act on the action object. This avoids the situation in which the object that reaches its advance point first, before the team cooperation begins, is damaged by the action object.
For a better understanding of the concepts of the advance point, the first advance point, the second advance point, the first position, the dodge position, and so on, refer to fig. 4, in which each square represents a position: if 0 is the action position, then the squares marked 1 are advance points; assuming the bold 1 is the first advance point, the squares marked 2 are first positions; and assuming the italic, underlined 1 is the second advance point, the squares marked 3 are second positions.
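The grid relationships described for fig. 4 can be reproduced with a few lines of code. In the sketch below, 4-neighbour adjacency and the concrete coordinates are assumptions chosen to mirror the figure; the functions simply enumerate advance points and the first/second positions (from which dodge positions would be chosen).

```python
def neighbours(pos):
    x, y = pos
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def advance_points(action_pos):
    """All positions adjacent to the action position (the cells marked 1 in fig. 4)."""
    return neighbours(action_pos)

def staging_positions(advance_point, action_pos):
    """Positions adjacent to an advance point but not adjacent to the action position
    (the cells marked 2 or 3 in fig. 4); dodge positions are chosen among these."""
    return {p for p in neighbours(advance_point)
            if p != action_pos and p not in neighbours(action_pos)}

action = (0, 0)
first_advance = (1, 0)     # e.g. the bold 1 in fig. 4
second_advance = (-1, 0)   # e.g. the italic, underlined 1 in fig. 4
print(sorted(advance_points(action)))
print(sorted(staging_positions(first_advance, action)))   # candidate first positions
print(sorted(staging_positions(second_advance, action)))  # candidate second positions
```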
In any of the above technical solutions, the action object includes an attack object, that is, the action performed by each object in the object combination on the attack object is an attack, and the method for controlling object actions may further include: if it is determined, according to the prediction result of the first action prediction model, that the object combination cannot successfully attack the attack object, acquiring candidate acquisition items within the preset visual field range of the first object, and determining, according to the first object's degree of need for each candidate acquisition item and the acquisition distance to it, whether there is an item to be acquired among the candidate acquisition items; and if so, acquiring a pursuit object of the first object, and determining and controlling, according to the pursuit distance between the first object and the pursuit object, the acquisition distance between the first object and the item to be acquired, and the priority coefficient of the item to be acquired relative to the pursuit object, whether the first object pursues the pursuit object or acquires the item to be acquired.
The attack object may be an object that the first object can attack without moving, and the pursuit object may be an object that the first object can attack only after moving. A candidate acquisition item may be an item that can be acquired within the preset visual field range. For each candidate acquisition item, the degree of need may be the degree to which the first object needs that item, and the acquisition distance may be the moving distance between the first object and the item; whether the first object really needs to acquire the item at the current moment can be determined from the degree of need and the acquisition distance, and if so, the candidate acquisition item can be taken as an item to be acquired. When an item to be acquired exists among the candidate acquisition items, the first object must move to the vicinity of the item before acquiring it, and during this movement other objects may reach the vicinity of the item first; that is, the acquisition process carries a risk. For this reason, even if an item to be acquired exists, the pursuit object of the first object can be acquired first. Similarly, since the first object must move to the vicinity of the pursuit object before attacking it, other objects may reach the vicinity of the pursuit object first and defeat it while the first object is still moving; that is, the pursuit process also carries a risk. Therefore, which of the pursuit risk and the acquisition risk is smaller can be determined according to the pursuit distance between the first object and the pursuit object, the acquisition distance between the first object and the item to be acquired, and the priority coefficient of the item to be acquired relative to the pursuit object, and it is then determined whether to control the first object to pursue the pursuit object or to acquire the item to be acquired.
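The disclosure states only that the decision depends on the two moving distances and on the priority coefficient, without giving a formula. The sketch below therefore uses a hypothetical weighting (dividing the acquisition distance by the priority coefficient) purely to illustrate where such a comparison would sit in the control flow.

```python
def choose_pursue_or_acquire(pursuit_distance: float,
                             acquisition_distance: float,
                             priority_coefficient: float) -> str:
    """Decide between pursuing the pursuit object and acquiring the item to be acquired.

    The concrete weighting below (scaling the acquisition distance by 1/priority) is a
    hypothetical choice for illustration, not the disclosed formula.
    """
    effective_acquire_cost = acquisition_distance / max(priority_coefficient, 1e-6)
    return "acquire_item" if effective_acquire_cost < pursuit_distance else "pursue_target"

# A nearby, high-priority item wins over a distant pursuit target under this assumption.
print(choose_pursue_or_acquire(pursuit_distance=8, acquisition_distance=4,
                               priority_coefficient=1.5))  # -> acquire_item
```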
In order to understand the specific implementation of the above steps more intuitively, the description continues with the confrontation-type game from the example above. For instance, after it is determined that the first object cannot successfully kill any enemy target within the preset attack range, the first object can be controlled to perform a treasure-hunting operation (i.e., an operation of acquiring an item to be acquired, where hunting can be understood as acquisition); here the enemy target may also be referred to as the attack target. Specifically:
5) According to the degree of need for each candidate treasure within the preset visual field range and the search distance to it, the candidate treasures are ranked by priority, and the treasure to be searched for is determined from the candidate treasures as the one with the highest priority that does not prevent entering the safety zone on time and has not been locked and targeted by a teammate. The blood volume of an object inside the safety zone does not decrease, which avoids the situation in which the object's blood volume drops to zero and its life ends while it hunts for treasure outside the safety zone. If no treasure to be searched for is found, jump to step 7); otherwise execute step 6).
6) Determine A's pursuit target according to steps 7) and 8), and then determine whether to pursue the pursuit target or to search for the treasure according to the pursuit distance, the search distance, and the priority coefficient.
7) The enemy targets within A's preset visual field range are sorted in ascending order of blood volume, defensive power, and moving distance, the purpose being to preferentially pursue the enemy target with the lowest blood volume, defensive power, and moving distance, that is, the enemy target that is easiest to catch and to defeat.
8) It is determined which enemy target to take as the pursuit target. Specifically, the enemy targets are processed one by one:
When the safety zone is large, enemy targets that are not inside the safety zone are excluded, because it is not worth pursuing enemy targets outside the safety zone at the cost of losing blood. Enemy targets that have already been locked by teammates and can be killed are also excluded. The enemy targets that are not excluded are processed in order of priority as follows:
Assume that A pursues a certain enemy target X; predict A's first advance point and whether A can kill X on its own when it moves to the first advance point. If not, predict whether B can also pursue X; if so, predict B's second advance point and whether A and B can jointly kill X when A moves to the first advance point and B moves to the second advance point, where the first advance point and the second advance point may be advance points that form an encirclement of X. If A and B cannot jointly kill X, the action of C is predicted; the prediction process is similar to that for B and is not repeated. Taking as an example the case where it is determined from the predictions of both parties that A and B can jointly kill X, A is controlled to move towards the first advance point and B is controlled to move towards the second advance point. When A has moved to a first position and B has not yet moved to a second position, it is determined whether A needs to be controlled to dodge, so as to avoid the situation in which A, having moved to the first advance point first, is attacked by X while B has not yet moved to the second advance point. If so, A is controlled to move to a dodge position among the first positions, or to move back and forth among the dodge positions; when B moves to any second position, A is controlled to move to the first advance point and B is controlled to move to the second advance point (that is, A and B enter their advance points together only after both are in position, so that at most one teammate can be attacked by X), and after A and B each reach their respective advance points they attack the enemy target together.
On this basis, taking A as an example, the moving path of A can be determined from A's moving coordinate by using the A* algorithm. Optionally, if the moving coordinate cannot be determined through the above steps (that is, no attack target, pursuit target or treasure to be searched can be determined), then: if A is still outside the safety zone, it may enter the safety zone along the shortest path; if A is already inside the safety zone, it may go to the center point of the safety zone; if no safety zone exists, it may go to the origin coordinate; and so on, which is not specifically limited herein.
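A minimal sketch of this fallback coordinate selection, assuming the safety zone is a circle given by its center and radius; the A*-based path planning from the current position to the returned coordinate is taken as given elsewhere.

```python
def fallback_coordinate(position, safe_zone):
    """Choose where to go when no attack target, pursuit target or treasure was selected.
    safe_zone is (center, radius) or None when no safety zone exists."""
    if safe_zone is None:
        return (0.0, 0.0)                            # no safety zone: go to the origin
    (cx, cy), radius = safe_zone
    dx, dy = position[0] - cx, position[1] - cy
    dist = (dx * dx + dy * dy) ** 0.5
    if dist > radius:                                # outside: enter along the shortest path
        return (cx + dx / dist * radius, cy + dy / dist * radius)
    return (cx, cy)                                  # inside: head for the center point
```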
Example four
Fig. 5 is a block diagram of a control apparatus for object actions according to a fourth embodiment of the present invention, which is configured to execute the control method for object actions according to any of the embodiments described above. The apparatus and the method for controlling object actions in the embodiments described above belong to the same inventive concept, and details not described in detail in the embodiments of the apparatus may refer to the embodiments of the method described above. Referring to fig. 5, the apparatus may specifically include: a model acquisition module 410, a first action prediction module 420, a second action prediction module 430, and an action control module 440.
The model acquisition module 410 is configured to obtain a first action prediction model and an action object of a first object, and a second action prediction model of a second object belonging to the same object team as the first object;
the first action prediction module 420 is configured to predict whether the second object has an action demand for the action object based on the first action prediction model, and if so, predict whether the action can be successfully performed when the first object and the second object act on the action object together;
a second action prediction module 430, configured to predict whether the second object has an action demand for the action object based on the second action prediction model if it is determined that the object combination including the first object and the second object can successfully act on the action object according to the prediction result of the first action prediction model, and predict whether the action can be successfully performed when the first object and the second object act on the action object together if the second object has the action demand for the action object;
and the action control module 440 is used for controlling the object combination to act on the action object when the object combination is determined to be able to act on the action object successfully according to the prediction result of the second action prediction model.
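As a rough illustration of how the four modules cooperate, the sketch below wires them into one controller; the method names on the prediction models (has_demand, joint_success) and the class layout are assumptions of the sketch, not an API defined by this embodiment.

```python
class ObjectActionController:
    """Illustrative wiring of the four modules described above."""

    def __init__(self, model_store):
        self.model_store = model_store                # plays the role of module 410

    def control(self, first, second, action_object):
        m1 = self.model_store[first]                  # first object's prediction model
        m2 = self.model_store[second]                 # second object's prediction model

        # First action prediction module (420): predict with the first object's model.
        if not (m1.has_demand(second, action_object)
                and m1.joint_success((first, second), action_object)):
            return False

        # Second action prediction module (430): re-predict with the second object's model.
        if not (m2.has_demand(second, action_object)
                and m2.joint_success((first, second), action_object)):
            return False

        # Action control module (440): both models agree, so act as a combination.
        self.act((first, second), action_object)
        return True

    def act(self, combination, action_object):
        print(f"{combination} act on {action_object} together")
```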
Optionally, the apparatus for controlling an action of the object may further include:
the first action demand prediction module is used for, after it is predicted based on the first action prediction model whether the action can be successfully performed, if not, acquiring a third object belonging to the same object team as the first object and predicting whether the third object has an action demand on the action object;
the first action success prediction module is used for, if the third object has an action demand on the action object, predicting whether the action can be successfully performed when the first object, the second object and the third object act on the action object together;
the second action prediction module 430 may include:
a second action success prediction unit, used for the case where it is determined according to the prediction result of the first action prediction model that an object combination including the first object, the second object and the third object can successfully act on the action object;
the apparatus for controlling an operation of the object may further include:
the second action demand prediction module is used for, after it is predicted based on the second action prediction model whether the action can be successfully performed, if not, predicting whether the third object has an action demand on the action object;
the second action success prediction module is used for, if the third object has an action demand on the action object, predicting whether the action can be successfully performed when the first object, the second object and the third object act on the action object together;
the action control module 440 is specifically configured to:
when it is determined according to the prediction result of the second action prediction model that the object combination can successfully act on the action object, predicting, based on a third action prediction model of the third object, whether the third object has an action demand on the action object, and if so, predicting whether the action can be successfully performed when the first object, the second object and the third object act on the action object together;
and when the object combination is determined to be capable of successfully acting on the action object according to the prediction result of the third action prediction model, controlling the object combination to act on the action object.
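One compact way to read this three-object extension is that the combination is committed only after each member's own prediction model, consulted in turn, confirms that the combination can successfully act on the action object. The sketch below assumes a predicts_joint_success method on each model; that name is illustrative only.

```python
def chained_combination_check(models_in_order, combination, action_object):
    """models_in_order: the first, second and third objects' prediction models, in turn.
    The combination is confirmed only when every model in the chain predicts success."""
    return all(model.predicts_joint_success(combination, action_object)
               for model in models_in_order)
```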
Optionally, the apparatus for controlling an action of the object may further include:
the object association module is used for, after it is predicted based on the first action prediction model whether the action can be successfully performed, if so, associating the action object with the first object;
the second action prediction module 430 may include:
and a first action success prediction unit, used for acquiring an association result between the action object and the first object if it is determined according to the prediction result of the first action prediction model that an object combination including the first object and the second object can successfully act on the action object.
Optionally, the first action prediction module 420 and/or the second action prediction module 430 may include:
a second advance point prediction unit, used for predicting, from among the advance points corresponding to the action object, a second advance point at which the second object is located when acting on the action object, wherein the advance points include positions adjacent to the action position at which the action object is located;
and an action success prediction unit, used for predicting whether the action can be successfully performed when the first object acts on the action object at a first advance point and the second object acts on the action object at the second advance point, wherein the first advance point includes the advance point at which the first object is located when acting on the action object.
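For a grid map, the advance points can be read as free cells adjacent to the action position, with the second advance point chosen opposite the first so that the two objects form a pincer around the action object; this reading and the helper names below are assumptions for illustration.

```python
def advance_points(action_position, blocked):
    """Positions adjacent to the action position that are not blocked."""
    x, y = action_position
    adjacent = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return [p for p in adjacent if p not in blocked]


def pick_second_advance_point(first_point, action_position, blocked):
    """Prefer the point opposite the first advance point to encircle the action object."""
    opposite = (2 * action_position[0] - first_point[0],
                2 * action_position[1] - first_point[1])
    options = [p for p in advance_points(action_position, blocked) if p != first_point]
    if opposite in options:
        return opposite
    return options[0] if options else None
```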
On this basis, optionally, the moving distance between the first object and the first advance point is smaller than the moving distance between the second object and the second advance point; a position adjacent to the first advance point but not adjacent to the action position is taken as a first position, and a position adjacent to the second advance point but not adjacent to the action position is taken as a second position;
the action control module 440 may include:
a first movement control unit, used for controlling the first object to move toward the first advance point and controlling the second object to move toward the second advance point;
a dodging determination unit, used for determining, when the first object moves to any first position, whether the first object needs to be controlled to dodge the action object;
a second movement control unit, used for, if the first object needs to dodge the action object, determining a dodging position from the first positions and controlling the first object to move toward the dodging position;
a third movement control unit, used for controlling the first object to move to the first advance point and controlling the second object to move to the second advance point when the second object moves to any second position;
and an action control unit, used for controlling the object combination including the first object and the second object to act on the action object when the first object reaches the first advance point and the second object reaches the second advance point.
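The five units above can be pictured as one per-tick routine; the plan dictionary, the move_toward/act_with hooks and the needs_dodge/pick_dodge_position callbacks are all assumed placeholders rather than anything defined by the embodiment.

```python
def cooperative_approach(first, second, plan, needs_dodge, pick_dodge_position):
    """Called once per game tick. plan holds the two advance points and the
    first/second position sets."""
    p1, p2 = plan["first_advance_point"], plan["second_advance_point"]

    if second.position not in plan["second_positions"]:
        # First and second movement control: both objects head for their advance points,
        # but if the faster first object is already at a first position and would be
        # exposed, hold it at a dodging position until the second object catches up.
        second.move_toward(p2)
        if first.position in plan["first_positions"] and needs_dodge(first):
            first.move_toward(pick_dodge_position(plan["first_positions"]))
        else:
            first.move_toward(p1)
        return

    # Third movement control: the second object has reached a second position, so both
    # objects converge on their advance points and then act on the action object together.
    first.move_toward(p1)
    second.move_toward(p2)
    if first.position == p1 and second.position == p2:
        first.act_with(second)   # action control unit
```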
Optionally, the action object includes an attack object, and the control device for controlling the action of the object may further include:
the to-be-acquired article determining module is used for, if it is determined according to the prediction result of the first action prediction model that the object combination cannot successfully attack the attack object, acquiring candidate acquisition articles within a preset visual field range of the first object, and determining whether a to-be-acquired article exists among the candidate acquisition articles according to the first object's degree of need for the candidate acquisition articles and the acquisition distance;
and the object action determining module is used for, if the to-be-acquired article exists, acquiring a pursuit object of the first object, and determining whether to control the first object to pursue the pursuit object or to acquire the to-be-acquired article according to the pursuit distance between the first object and the pursuit object, the acquisition distance between the first object and the to-be-acquired article, and the priority coefficient of the to-be-acquired article relative to the pursuit object.
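One possible reading of the distance/priority-coefficient trade-off is shown below; the specific weighting rule is an assumption, since the embodiment does not fix a formula.

```python
def choose_action(pursuit_distance, acquisition_distance, priority_coefficient):
    """priority_coefficient weights the to-be-acquired article relative to the pursuit
    object: the larger it is, the more the decision leans toward acquiring."""
    if pursuit_distance * priority_coefficient <= acquisition_distance:
        return "pursue"
    return "acquire"


# e.g. choose_action(5.0, 20.0, 2.0) -> "pursue"; choose_action(15.0, 20.0, 2.0) -> "acquire"
```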
Optionally, the apparatus for controlling an action of the object may further include:
the object existence determining module is used for determining whether an object that the first object alone can successfully act on exists among the candidate objects within a preset range of the first object, wherein the preset range includes a preset movement range or a preset visual field range;
and the action object determining module is used for, if no such object exists, determining the action object from the candidate objects.
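A hedged sketch of these two modules: a team action object is selected only when no candidate can be handled by the first object alone; how the remaining candidate is then chosen (here by an assumed joint_priority key) is not specified by the embodiment.

```python
def determine_action_object(first, candidates, solo_success, joint_priority):
    """solo_success(first, c) -> bool is the assumed single-object prediction."""
    if any(solo_success(first, c) for c in candidates):
        return None                              # the first object can act alone
    return max(candidates, key=joint_priority, default=None)
```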
In the control apparatus for object actions provided in the fourth embodiment of the present invention, the model acquisition module obtains a first action prediction model and an action object of a first object, and a second action prediction model of a second object belonging to the same object team as the first object; the first action prediction module predicts, based on the first action prediction model, whether the second object has an action demand on the action object, and if so, predicts whether the action can be successfully performed when the first object and the second object act on the action object together; if the second action prediction module determines, according to the prediction result of the first action prediction model, that an object combination including the first object and the second object can successfully act on the action object, that is, when the first object has decided to act on the action object, it predicts again, based on the second action prediction model, whether the second object has an action demand on the action object, and if so, predicts whether the action can be successfully performed when the first object and the second object act on the action object together; and when the action control module determines, according to the prediction result of the second action prediction model, that the object combination can successfully act on the action object, it controls the object combination to act on the action object. The apparatus described above predicts, through the action prediction model of a given object, the actions of that object and/or of objects belonging to the same object team whose actions have not yet been determined, and, when at least two action prediction models each predict that the object combination consisting of the objects corresponding to those models can successfully act on the action object, controls each object in the combination to act on the action object in team cooperation, thereby achieving cooperative action between the objects.
The control device for the object motion provided by the embodiment of the invention can execute the control method for the object motion provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
It should be noted that, in the embodiment of the control device for object actions, the included units and modules are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
EXAMPLE five
Fig. 6 is a schematic structural diagram of a control apparatus for object actions according to a fifth embodiment of the present invention, as shown in fig. 6, the apparatus includes a memory 510, a processor 520, an input device 530, and an output device 540. The number of processors 520 in the device may be one or more, and one processor 520 is taken as an example in fig. 6; the memory 510, processor 520, input device 530, and output device 540 in the apparatus may be connected by a bus or other means, such as by bus 550 in fig. 6.
The memory 510 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the method for controlling object actions in the embodiment of the present invention (for example, the model acquisition module 410, the first action prediction module 420, the second action prediction module 430, and the action control module 440 in the apparatus for controlling object actions). The processor 520 executes various functional applications and data processing of the device by running the software programs, instructions, and modules stored in the memory 510, that is, implements the method for controlling object actions described above.
The memory 510 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the device, and the like. Further, the memory 510 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 510 may further include memory located remotely from processor 520, which may be connected to devices through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the device. The output device 540 may include a display device such as a display screen.
EXAMPLE six
An embodiment of the present invention provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method for controlling an object action, the method including:
acquiring a first action prediction model and an action object of a first object and a second action prediction model of a second object belonging to the same object team with the first object;
whether the second object has an action demand on the action object is predicted based on the first action prediction model, and if yes, whether the action can be successfully performed when the first object and the second object act on the action object together is predicted;
if the object combination comprising the first object and the second object can successfully act on the action object according to the prediction result of the first action prediction model, predicting whether the second object has an action demand on the action object based on the second action prediction model, and if so, predicting whether the action can be successfully performed when the first object and the second object jointly act on the action object;
and when the object combination is determined to be capable of successfully acting on the action object according to the prediction result of the second action prediction model, controlling the object combination to act on the action object.
Of course, the storage medium containing the computer-executable instructions provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the method for controlling object actions provided by any embodiments of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. With this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for controlling an action of an object, comprising:
acquiring a first action prediction model and an action object of a first object and a second action prediction model of a second object belonging to the same object team with the first object;
predicting whether the second object has an action demand on the action object based on the first action prediction model, and if so, predicting whether the action can be successfully performed when the first object and the second object act on the action object together;
if it is determined that an object combination including the first object and the second object can successfully act on the action object according to a prediction result of the first action prediction model, predicting whether the second object has an action demand on the action object based on the second action prediction model, and if so, predicting whether the action can be successfully performed when the first object and the second object act on the action object together;
and when the object combination is determined to be capable of successfully acting on the action object according to the prediction result of the second action prediction model, controlling the object combination to act on the action object.
2. The method of claim 1, wherein after predicting whether the action can be successfully performed based on the first action prediction model, further comprising:
if not, acquiring a third object belonging to the same object team as the first object, and predicting whether the third object has an action requirement on the action object;
if yes, predicting whether the action can be successfully performed when the first object, the second object and the third object perform actions on the action object together;
wherein the determining, according to the prediction result of the first action prediction model, that an object combination including the first object and the second object can successfully act on the action object comprises:
determining, according to the prediction result of the first action prediction model, that an object combination including the first object, the second object and the third object can successfully act on the action object;
after predicting whether the action can be successfully performed based on the second action prediction model, the method further comprises:
if not, predicting whether the third object has an action requirement on the action object;
if yes, predicting whether the action can be successfully performed when the first object, the second object and the third object perform actions on the action object together;
when it is determined that the object combination can successfully act on the action object according to the prediction result of the second action prediction model, controlling the object combination to act on the action object includes:
when it is determined according to the prediction result of the second action prediction model that the object combination can successfully act on the action object, predicting, based on a third action prediction model of the third object, whether the third object has an action demand on the action object, and if so, predicting whether the action can be successfully performed when the first object, the second object and the third object act on the action object together;
and when the object combination is determined to be capable of successfully acting on the action object according to the prediction result of the third action prediction model, controlling the object combination to act on the action object.
3. The method of claim 1, wherein after predicting whether the action can be successfully performed based on the first action prediction model, further comprising:
if so, associating the action object with the first object;
wherein the determining, according to the prediction result of the first action prediction model, that an object combination including the first object and the second object can successfully act on the action object comprises:
and if it is determined that the object combination comprising the first object and the second object can successfully act on the action object according to the prediction result of the first action prediction model, acquiring the association result between the action object and the first object.
4. The method of claim 1, wherein predicting whether the action can be successfully performed when the first object and the second object jointly perform the action on the action object comprises:
predicting, from among advance points corresponding to the action object, a second advance point at which the second object is located when acting on the action object, wherein the advance points include positions adjacent to an action position at which the action object is located;
predicting whether the action can be successfully performed when the first object acts on the action object at a first advance point and the second object acts on the action object at the second advance point, wherein the first advance point includes an advance point at which the first object is located when acting on the action object.
5. The method according to claim 4, wherein a moving distance between the first object and the first advance point is smaller than a moving distance between the second object and the second advance point, a position adjacent to the first advance point and not adjacent to the action position is taken as a first position, and a position adjacent to the second advance point and not adjacent to the action position is taken as a second position;
the controlling the object combination to act on the action object comprises:
controlling the first object to move toward the first advance point and controlling the second object to move toward the second advance point;
when the first object moves to any one first position, determining whether the first object needs to be controlled to avoid the action object;
if yes, determining a dodging position from the first positions, and controlling the first object to move towards the dodging position;
when the second object moves to any one of the second positions, controlling the first object to move to the first advance point and controlling the second object to move to the second advance point;
controlling the object combination including the first object and the second object to act on the action object when the first object reaches the first advance point and the second object reaches the second advance point.
6. The method of claim 1, wherein the action object comprises an attack object, the method further comprising:
if it is determined according to the prediction result of the first action prediction model that the object combination cannot successfully attack the attack object, acquiring candidate acquisition articles within a preset visual field range of the first object, and determining whether a to-be-acquired article exists among the candidate acquisition articles according to the first object's degree of need for the candidate acquisition articles and the acquisition distance;
if so, acquiring a pursuit object of the first object, and determining whether to control the first object to pursue the pursuit object or to acquire the to-be-acquired article according to a pursuit distance between the first object and the pursuit object, the acquisition distance between the first object and the to-be-acquired article, and a priority coefficient of the to-be-acquired article relative to the pursuit object.
7. The method of claim 1, further comprising:
determining whether an object on which the first object alone can successfully act exists among candidate objects within a preset range of the first object, wherein the preset range comprises a preset movement range or a preset visual field range;
if not, determining the action object from each candidate object.
8. An apparatus for controlling an operation of an object, comprising:
the model acquisition module is used for acquiring a first action prediction model and an action object of a first object and a second action prediction model of a second object belonging to the same object team with the first object;
the first action prediction module is used for predicting whether the second object has an action demand on the action object based on the first action prediction model, and if so, predicting whether the action can be successfully performed when the first object and the second object act on the action object together;
a second action prediction module, configured to predict whether the second object has an action demand for the action object based on the second action prediction model if it is determined that an object combination including the first object and the second object can successfully act on the action object according to a prediction result of the first action prediction model, and predict whether the action can be successfully performed when the first object and the second object act on the action object together if the second object has the action demand for the action object;
and the action control module is used for controlling the object combination to act on the action object when the object combination is determined to act on the action object successfully according to the prediction result of the second action prediction model.
9. An apparatus for controlling an action of an object, comprising:
one or more processors;
a memory for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement a method of controlling the actions of an object as recited in any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a method of controlling an action of an object according to any one of claims 1 to 7.
CN202111260109.5A 2021-10-28 2021-10-28 Object action control method, device, equipment and storage medium Pending CN113975799A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111260109.5A CN113975799A (en) 2021-10-28 2021-10-28 Object action control method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111260109.5A CN113975799A (en) 2021-10-28 2021-10-28 Object action control method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113975799A true CN113975799A (en) 2022-01-28

Family

ID=79743092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111260109.5A Pending CN113975799A (en) 2021-10-28 2021-10-28 Object action control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113975799A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination