CN116943206A - Method and device for determining moving path, storage medium and electronic equipment - Google Patents
Method and device for determining moving path, storage medium and electronic equipment
- Publication number
- CN116943206A (Application number CN202310121996.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- point
- obstacle
- virtual
- target virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
- A63F13/56—Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/65—Methods for processing data by generating or executing the game program for computing the condition of a game character
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
Abstract
The disclosure provides a method and device for determining a moving path, a storage medium, and an electronic device. The method includes the following steps: determining a first movement path of a target virtual object in a target virtual scene from a current position point to a target position point, where the first movement path includes at least one waypoint between the current position point and the target position point; obtaining obstacle information corresponding to each virtual obstacle object within a target interval range that is within a target threshold distance of the current position point, where each virtual obstacle object is configured with a unique object identifier in the target virtual scene; when it is determined from the obstacle information that a target virtual obstacle object exists between the current position point and a reference waypoint, selecting a target obstacle avoidance point based on boundary points matched with the target virtual obstacle object; and updating the first movement path with the target obstacle avoidance point to obtain a second movement path. The method and device solve the technical problem in the related art of low efficiency in finding a movement path.
Description
Technical Field
The present invention relates to the field of computers, and in particular, to a method and apparatus for determining a movement path, a storage medium, and an electronic device.
Background
In existing 3D open-world games, when a newly added obstacle appears in the movement path of a virtual object controlled by artificial intelligence and blocks its advance, the related art generally rebuilds the navigation data on the basis of the original map: the original navigation mesh is split again to form a new navigation mesh so that the virtual object can avoid the obstacle and find a path based on the new navigation data. However, rebuilding the navigation data not only consumes considerable memory but also requires a large amount of time, so the related art suffers from the technical problem of low efficiency in finding a movement path.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
Embodiments of the present invention provide a method and device for determining a moving path, a storage medium, and an electronic device, which at least solve the technical problem in the related art of low efficiency in finding a movement path.
According to one aspect of the embodiments of the present invention, a method for determining a movement path is provided, including: determining a first movement path of a target virtual object in a target virtual scene from a current position point to a target position point, where the first movement path includes at least one waypoint between the current position point and the target position point; obtaining obstacle information corresponding to each virtual obstacle object within a target interval range that is within a target threshold distance of the current position point, where each virtual obstacle object is configured with a unique object identifier in the target virtual scene; when it is determined from the obstacle information that a target virtual obstacle object exists between the current position point and a reference waypoint, selecting a target obstacle avoidance point based on boundary points matched with the target virtual obstacle object, where the target virtual object avoids the target virtual obstacle object when moving to the target obstacle avoidance point, and the reference waypoint is the next waypoint adjacent to the current position point as the target virtual object moves along the first movement path; and updating the first movement path with the target obstacle avoidance point to obtain a second movement path.
According to another aspect of the embodiments of the present invention, a device for determining a movement path is also provided, including: a determining unit, configured to determine a first movement path of a target virtual object in a target virtual scene from a current position point to a target position point, where the first movement path includes at least one waypoint between the current position point and the target position point; an obtaining unit, configured to obtain obstacle information corresponding to each virtual obstacle object within a target interval range that is within a target threshold distance of the current position point, where each virtual obstacle object is configured with a unique object identifier in the target virtual scene; a selecting unit, configured to select a target obstacle avoidance point based on boundary points matched with a target virtual obstacle object when it is determined from the obstacle information that the target virtual obstacle object exists between the current position point and a reference waypoint, where the target virtual object avoids the target virtual obstacle object when moving to the target obstacle avoidance point, and the reference waypoint is the next waypoint adjacent to the current position point as the target virtual object moves along the first movement path; and an updating unit, configured to update the first movement path with the target obstacle avoidance point to obtain a second movement path.
According to still another aspect of the embodiments of the present application, a computer-readable storage medium is also provided, in which a computer program is stored, where the computer program is configured to perform the above method for determining a movement path when executed.
According to yet another aspect of the embodiments of the present application, a computer program product or computer program is provided, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the above method for determining a movement path.
According to still another aspect of the embodiments of the present application, an electronic apparatus is also provided, including a memory in which a computer program is stored, and a processor configured to perform the above method for determining a movement path by means of the computer program.
In the embodiments of the present application, a first movement path of a target virtual object in a target virtual scene from a current position point to a target position point is determined, where the first movement path includes at least one waypoint between the current position point and the target position point; obstacle information corresponding to each virtual obstacle object within a target interval range that is within a target threshold distance of the current position point is obtained, where each virtual obstacle object is configured with a unique object identifier in the target virtual scene; when it is determined from the obstacle information that a target virtual obstacle object exists between the current position point and a reference waypoint, a target obstacle avoidance point is selected based on boundary points matched with the target virtual obstacle object, where the target virtual object avoids the target virtual obstacle object when moving to the target obstacle avoidance point, and the reference waypoint is the next waypoint adjacent to the current position point as the target virtual object moves along the first movement path; and the first movement path is updated with the target obstacle avoidance point to obtain a second movement path. Because the obstacle information corresponding to each virtual obstacle object is obtained and the target obstacle avoidance point is selected based on that information, the movement path of the artificial-intelligence-controlled target virtual object can be corrected quickly when a new obstacle object is added, simply by determining the target obstacle avoidance point and without rebuilding the navigation data. This solves the problem of low efficiency in determining a movement path in the related art and achieves the technical effect of improving the efficiency of determining a movement path.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment of an alternative method of determining a path of movement according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative method of determining a path of movement according to an embodiment of the application;
FIG. 3 is a schematic diagram of an alternative method of determining a path of movement according to an embodiment of the application;
FIG. 4 is a schematic diagram of another alternative method of determining a path of movement according to an embodiment of the application;
FIG. 5 is a schematic illustration of yet another alternative method of determining a path of movement according to an embodiment of the application;
FIG. 6 is a schematic diagram of yet another alternative method of determining a path of movement according to an embodiment of the present application;
FIG. 7 is a schematic diagram of yet another alternative method of determining a path of movement according to an embodiment of the application;
FIG. 8 is a schematic diagram of yet another alternative method of determining a path of movement according to an embodiment of the application;
FIG. 9 is a schematic diagram of yet another alternative method of determining a path of movement according to an embodiment of the present application;
FIG. 10 is a schematic diagram of yet another alternative method of determining a path of movement according to an embodiment of the invention;
FIG. 11 is a schematic diagram of yet another alternative method of determining a path of movement according to an embodiment of the invention;
FIG. 12 is a schematic diagram of yet another alternative method of determining a path of movement according to an embodiment of the invention;
FIG. 13 is a schematic diagram of yet another alternative method of determining a path of movement according to an embodiment of the invention;
FIG. 14 is a schematic diagram of yet another alternative method of determining a path of movement according to an embodiment of the invention;
FIG. 15 is a schematic diagram of yet another alternative method of determining a path of movement according to an embodiment of the present invention;
FIG. 16 is a schematic diagram of yet another alternative method of determining a path of movement according to an embodiment of the invention;
FIG. 17 is a flow chart of yet another alternative method of determining a path of movement according to an embodiment of the present invention;
FIG. 18 is a schematic diagram of an alternative movement path determination device according to an embodiment of the present invention;
fig. 19 is a schematic structural view of an alternative electronic device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of protection of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and in the above drawings are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
The terms used in the present application will be described below:
AI: in this context, an NPC in the game that exhibits personified behavior;
RecastNavigation: a pathfinding navigation system used in game engines such as Unreal;
Navigation mesh (navmesh): navigation data generated by RecastNavigation from the 3D game world; automatic pathfinding can be performed based on the navmesh;
Tile: the navmesh divides the 3D world into square blocks, each of which is one Tile;
The solution provided by the embodiments of the present application relates to artificial intelligence technologies such as computer vision, and is specifically described by the following embodiments:
According to one aspect of the embodiments of the present application, a method for determining a movement path is provided. Optionally, as an alternative implementation, the method may be applied, but is not limited, to the environment shown in fig. 1, which may include, but is not limited to, a user device 102 and a server 112. The user device 102 may include, but is not limited to, a display 104, a processor 106, and a memory 108, and the server 112 includes a database 114 and a processing engine 116.
The specific process comprises the following steps:
Step S102: the user device 102 obtains a first movement path and obstacle information for a target virtual object in a target virtual scene;
Steps S104-S106: the obstacle information corresponding to each virtual obstacle object, identified by its unique object identifier in the client corresponding to the target virtual scene, is sent to the server 112 through the network 110;
Step S108: the server 112 determines, via the processing engine and from the obstacle information, the boundary points of the target virtual obstacle object located between the current position point and the reference waypoint;
Step S110: the server 112 selects a target obstacle avoidance point based on the boundary points via the processing engine;
Steps S112-S114: the target obstacle avoidance point information is sent to the user device 102 through the network 110; the user device 102 updates the first movement path with the target obstacle avoidance point via the processor 106 to obtain a second movement path, displays the second movement path on the display 104, and stores the path information and the target obstacle avoidance point in the memory 108.
In addition to the example shown in fig. 1, the above steps may be performed independently by the client or by the server, or cooperatively by the client and the server, for example by having the user device 102 perform step S108 so as to relieve the processing pressure on the server 112. The user device 102 includes, but is not limited to, a handheld device (e.g., a mobile phone), a notebook computer, a desktop computer, a vehicle-mounted device, and the like; the application does not limit the specific implementation of the user device 102.
Optionally, as an alternative implementation, as shown in fig. 2, the method for determining a movement path includes:
S202, determining a first movement path of a target virtual object in a target virtual scene from a current position point to a target position point, where the first movement path includes at least one waypoint between the current position point and the target position point;
S204, obtaining obstacle information corresponding to each virtual obstacle object within a target interval range that is within a target threshold distance of the current position point, where each virtual obstacle object is configured with a unique object identifier in the target virtual scene;
S206, when it is determined from the obstacle information that a target virtual obstacle object exists between the current position point and a reference waypoint, selecting a target obstacle avoidance point based on boundary points matched with the target virtual obstacle object, where the target virtual object avoids the target virtual obstacle object when moving to the target obstacle avoidance point, and the reference waypoint is the next waypoint adjacent to the current position point as the target virtual object moves along the first movement path;
S208, updating the first movement path with the target obstacle avoidance point to obtain a second movement path.
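For illustration only, the four steps S202-S208 above can be sketched as follows. This is a minimal sketch under assumed data structures; the function and field names (find_path, query_obstacles, boundary_points, and so on) are hypothetical and are not taken from this disclosure.

```python
# Hypothetical sketch of steps S202-S208; all names and data structures are illustrative.
import math

def update_movement_path(current, target, find_path, query_obstacles, threshold):
    """find_path and query_obstacles stand in for navmesh / obstacle-store queries."""
    # S202: first movement path (a non-empty sequence of waypoints) from current to target
    first_path = find_path(current, target)

    # S204: obstacle information within the target threshold distance of the current point
    obstacles = [o for o in query_obstacles()
                 if math.dist(current, o["center"]) <= threshold]

    # S206: if an obstacle blocks the segment to the reference (next) waypoint, pick the
    # boundary point giving the shortest detour as the target obstacle avoidance point
    reference_waypoint = first_path[0]
    for obstacle in obstacles:
        if segment_blocked_2d(current, reference_waypoint, obstacle):
            avoidance_point = min(obstacle["boundary_points"],
                                  key=lambda p: math.dist(current, p) + math.dist(p, target))
            # S208: update the first path with the avoidance point -> second movement path
            return [avoidance_point] + list(first_path)
    return list(first_path)

def segment_blocked_2d(a, b, obstacle):
    """Rough 2D check: does segment a-b pass within the obstacle's (inflated) radius?"""
    ax, ay, bx, by = a[0], a[1], b[0], b[1]
    cx, cy = obstacle["center"][0], obstacle["center"][1]
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((cx - ax) * abx + (cy - ay) * aby) / denom))
    px, py = ax + t * abx, ay + t * aby          # closest point on the segment to the center
    return math.hypot(cx - px, cy - py) <= obstacle["radius"]
```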
Optionally, in this embodiment, the above method for determining a movement path may be applied, but is not limited, to a 3D open-world game. The embodiment may be applied to automatic pathfinding by the server-side AI with player isolation: for example, an obstacle placed by the virtual character controlled by player A affects only the virtual character controlled by player A, and does not affect player B or player B's AI, where player B's AI may be, but is not limited to, an NPC in the game world that includes player B, or may refer to a virtual object controlled by player B. The method may also be applied to automatic pathfinding of a player-controlled virtual object, for example when the virtual character controlled by player A is performing an automatic pathfinding task, or is in an automatic idle (on-hook) state caused by network fluctuations, in which case the system automatically routes the virtual character controlled by player A.
Optionally, in this embodiment, in step S202, the target virtual object in the target virtual scene may be, but is not limited to, a virtual object controlled by artificial intelligence, for example an NPC controlled by artificial intelligence: a virtual object not directly controlled by a player, which needs to guide the user through tasks in an open-world game and must move along a route during a specific task, is determined as the target virtual object. The target virtual object may also be a virtual object directly controlled by a user: for example, when the user-controlled virtual object enters an automatic pathfinding mode or is in an idle (on-hook) state, the user does not need to operate the virtual object to move from the current position to another position, and the artificial intelligence controls the virtual object and selects its movement path.
Optionally, in this embodiment, there may be a plurality of virtual game scenes, and the target virtual scene may be understood, but is not limited to being understood, as a specific virtual scene among the plurality of virtual game scenes. Likewise, the target virtual object may be understood, but is not limited to being understood, as a specific virtual object among a plurality of virtual objects.
Optionally, in this embodiment, the first movement path may be understood as a path preliminarily determined from a navigation mesh. The navigation mesh can be understood as a walkable-surface data structure used for navigation and pathfinding in a complex space: the game map is divided into a plurality of convex polygon cells, and the walkable polygons are marked. The navigation mesh may be used to identify the terrain of a position, and may also be used to identify which movement mode should be used at the current position, such as walking, driving, climbing, swimming, and the like.
It should be noted that the first movement path is selected considering only the terrain factors of the original map. For example, as shown in fig. 3, the target virtual object 302 needs to move from position point A to position point B. On the original map, moving directly from A to B would require crossing a hillside, so the object must go around the hillside and can reach B smoothly via the flat position C. The first movement path 304 is therefore selected considering only the hillside factor in the original game map, and the target virtual object is controlled to move along A-C-B.
The position points and waypoints in the above step S202 are described below:
A position point may be used to indicate the location of the target virtual object in the target virtual scene. Optionally, a position point may be, but is not limited to being, understood as the specific location information point at which the target virtual object is located, and may be, but is not limited to being, represented by coordinates, the occupied navigation mesh cell, the direction on the map, and so on.
The current position point and the target position point in step S202 are explained as follows: the current position point may be understood, but is not limited to being understood, as the specific position of the target virtual object in the virtual scene at the current moment, and the target position point may be understood, but is not limited to being understood, as the specific position of the destination the target virtual object is to reach. For further illustration, suppose the target virtual object needs to move from a flat area in the upper left corner of the map to a house in the lower right corner; the coordinates (x1, y1, z1) of the flat area where the target virtual object currently stands represent the current position point, and the coordinates (x2, y2, z2) of the house in the lower right corner represent the target position point.
A waypoint may be a position used to indicate the next movement target during the movement of the target virtual object. In the above embodiment, the first movement path may include at least one waypoint, and the target virtual object may be controlled to move according to the waypoint sequence formed by the at least one waypoint, thereby moving to the target position point along the first movement path.
For example, the first movement path may contain the sequence: current position point A, waypoint B, target position point C. This indicates that the target virtual object is controlled to first move along the straight line from the current position point A toward waypoint B until it reaches waypoint B, and then move along the straight line from waypoint B toward the target position point C, so that the target virtual object moves according to the first movement path.
As another example, the second movement path may contain the sequence: waypoint A, waypoint B, waypoint C, target position point D. This indicates that the target virtual object is controlled to move along the straight line from waypoint A toward waypoint B, then along the straight line from waypoint B toward waypoint C, and finally along the straight line from waypoint C toward the target position point D, so that the target virtual object moves according to the second movement path.
Optionally, waypoints can be understood as points placed as needed in the movable area of the map. Waypoints may be selected by manual marking, or by a computer according to the navigation polygons: each navigation polygon may be marked with an ID in advance, the midpoint of each polygon edge is marked as a waypoint based on that ID, and the target waypoints are then determined based on the shortest distance between the current position point and the target position point. Alternatively, waypoints may be selected based on the distance between the current position point and the target position point: the obtained distance is taken as a first distance, and the waypoints are further determined according to the number of navigation mesh cells spanned by that distance.
For further illustration, fig. 4 shows a schematic route, where the starting point is the current position point 402 of the target virtual object and the end point is the target position point 404 that the target virtual object needs to reach; the edges of the navigation mesh polygons crossed by the route determine the waypoints.
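One possible reading of the waypoint selection described above (taking the midpoints of the navigation-polygon edges crossed by the route, as in fig. 4) is sketched below; the edge representation and function names are assumptions for illustration only.

```python
# Illustrative sketch only: waypoints taken as midpoints of the navmesh polygon edges
# that the route from the current position point to the target position point crosses.
def edge_midpoint(v1, v2):
    return tuple((a + b) / 2.0 for a, b in zip(v1, v2))

def waypoints_from_crossed_edges(crossed_edges):
    """crossed_edges: list of (vertex1, vertex2) pairs, one per navmesh polygon edge
    crossed between the current position point and the target position point (fig. 4)."""
    return [edge_midpoint(v1, v2) for v1, v2 in crossed_edges]

# Example: two shared edges crossed on the way to the target position point
edges = [((0.0, 0.0, 0.0), (4.0, 0.0, 0.0)),
         ((10.0, 2.0, 0.0), (10.0, 6.0, 0.0))]
print(waypoints_from_crossed_edges(edges))   # [(2.0, 0.0, 0.0), (10.0, 4.0, 0.0)]
```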
Optionally, in this embodiment, in step S204, obtaining the obstacle information corresponding to each virtual obstacle object within the target interval range that is within the target threshold distance of the current position point may be understood as follows. The target threshold distance may be determined according to the movement speed of the target virtual object and the map of the target virtual scene, or by manual marking; the target interval range may be determined according to the size of the navigation mesh cells and the movable range of the target virtual scene. It may also be understood as a preset target threshold distance of a certain value: when the target virtual object is detected to have moved to a certain position and the distance between that position and a virtual obstacle object is equal to or smaller than the target threshold distance, the obstacle information of the virtual obstacle objects within the target interval range of the current position point is obtained. Alternatively, when the user has just finished loading the game, the virtual obstacle objects within the target threshold distance of the target virtual object may be obtained in a batch and then further screened according to the target interval range, so as to obtain the screened virtual obstacle objects and their obstacle information in the target scene.
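A minimal sketch of the range query described above is given below, assuming the target interval range simply means that the distance from the current position point to the obstacle is at most the target threshold distance; all names and the dictionary layout are illustrative assumptions, not the format used by this disclosure.

```python
# Illustrative only: gather the obstacle info of virtual obstacle objects near the
# current position point, keyed by each object's unique object identifier.
import math

def obstacle_info_in_range(current, all_obstacles, target_threshold_distance):
    return {o["object_id"]: o["info"]                # unique object identifier -> obstacle info
            for o in all_obstacles
            if math.dist(current, o["position"]) <= target_threshold_distance}

obstacles = [{"object_id": "R02-WJ0004", "position": (3.0, 0.0, 0.0),
              "info": {"attribute": "cannot pass", "size": "large"}},
             {"object_id": "R02-WJ0005", "position": (50.0, 0.0, 0.0),
              "info": {"attribute": "cannot pass", "size": "small"}}]
print(obstacle_info_in_range((0.0, 0.0, 0.0), obstacles, target_threshold_distance=10.0))
```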
Optionally, in this embodiment, the obstacle information corresponding to a virtual obstacle object may include, but is not limited to: the size, attributes, and the like of the virtual obstacle object. For example, after the obstacle attribute information corresponding to the virtual obstacle object is obtained, it may be compared with the attribute information of the target virtual object to obtain a determination result, which is used to instruct the target virtual object whether obstacle avoidance of the movement path is required.
It should be noted that each virtual obstacle object is configured with a unique object identifier in the target virtual scene. For example, in the virtual scene in which user account A is logged in, different unique object identifiers are configured for the different obstacles added by user A in the virtual scene. The object identifier may, but is not limited to, be derived from factors such as the category of the virtual obstacle object, the size of the virtual obstacle object, and the time at which the virtual obstacle was added, thereby improving the diversity of identification means.
For further illustration, as shown in fig. 5, the target virtual object 502 needs to move from the current position point A to the target position point B, where route S1 is the first movement path 504. The first movement path 504 is a path preliminarily determined from the navigation mesh, on which no newly added obstacle is yet recorded. It is then detected that a virtual obstacle object 506 exists within the target interval range that is within the target threshold distance of the current position point, and the object identifier configured for the virtual obstacle object 506 in the target virtual scene is obtained.
Optionally, in this embodiment, in steps S206 to S208, whether a target virtual obstacle object exists between the current position point and the reference waypoint may be determined, but is not limited to being determined, from the obstacle information: based on the size and attributes in the obstacle information, it is judged whether the obstacle avoidance condition is met. For example, if the virtual obstacle object is a wall one meter high, it can be passed directly only if the height of the current target virtual object reaches a preset threshold; if that threshold is not reached, the wall cannot be passed directly and is determined to be a target virtual obstacle object. Optionally, in this embodiment, selecting the target obstacle avoidance point based on the boundary points matched with the target virtual obstacle object may, but is not limited to, be done at a two-dimensional or a three-dimensional level. At the two-dimensional level, a projection image of the target virtual obstacle object in the target virtual scene is first obtained, and the target obstacle avoidance point is then determined from the tangent points, boundary points, vertices, and the like of the projection image. It should be noted that the scene in this embodiment is a 3D open-world game scene; confirming the obstacle avoidance point directly in three-dimensional space requires a higher calculation cost, so the three-dimensional space is converted into a two-dimensional projection, which reduces the calculation cost of the target obstacle avoidance point and improves the efficiency of confirming it.
Optionally, in this embodiment, the target obstacle avoidance point may also be selected at the three-dimensional level, but is not limited to this approach, by using the navigation mesh: different convex polygons in the navigation mesh are labeled with different IDs, their vertices and edge midpoints are obtained, the navigation mesh cells crossed between the current position point and the target position point are obtained, and the target obstacle avoidance point is determined based on the crossed cells and the vertices and edge midpoints of each convex polygon.
Optionally, in this embodiment, in step S208, updating the first movement path with the target obstacle avoidance point to obtain the second movement path may include one of the following: adding the target obstacle avoidance point between the current position point and the reference waypoint in the first waypoint sequence, and determining the movement path indicated by the resulting second waypoint sequence as the second movement path; or replacing the reference waypoint in the first waypoint sequence with the target obstacle avoidance point, and determining the movement path indicated by the resulting reference waypoint sequence as the second movement path; where the first waypoint sequence is used to indicate the first movement path.
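A minimal sketch of the two update options is given below, assuming the first waypoint sequence is simply a list whose first element is the reference waypoint; the function names are hypothetical.

```python
# Illustrative only: the first movement path is assumed to be a list of waypoints
# (the "first waypoint sequence"); index 0 is the reference waypoint.
def insert_avoidance_point(first_waypoints, avoidance_point):
    """Option 1: add the avoidance point between the current position point and the
    reference waypoint, yielding the second waypoint sequence."""
    return [avoidance_point] + list(first_waypoints)

def replace_reference_waypoint(first_waypoints, avoidance_point):
    """Option 2: replace the reference waypoint with the avoidance point."""
    return [avoidance_point] + list(first_waypoints[1:])

path = [(5.0, 0.0, 0.0), (9.0, 3.0, 0.0)]          # reference waypoint, then target point
print(insert_avoidance_point(path, (3.0, 2.0, 0.0)))
print(replace_reference_waypoint(path, (3.0, 2.0, 0.0)))
```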
Optionally, in this embodiment, the second movement path is the path obtained by updating the first movement path with the target obstacle avoidance point; the first movement path and the second movement path may be different paths or the same path.
For further illustration, as shown in fig. 6, the first movement path 606 from the current position point of the target virtual object 602 to the target position point is obtained, and the obstacle information of the virtual obstacle object 604 is obtained, where the object identifier of the virtual obstacle object 604, configured in the target virtual scene based on the adding account information and the size, is WJ01-small. It is determined from this obstacle information that the virtual obstacle object 604 is a target virtual obstacle object between the current position point and the reference waypoint, a target obstacle avoidance point 610 is selected, and the first movement path 606 is then updated to a second movement path 608 using the target obstacle avoidance point 610.
It should be noted that in the existing obstacle avoidance scheme, when a new obstacle is added, the newly added obstacle and the original navigation map are treated as a whole, so when a large number of obstacles need to be added, a large amount of memory and computation is continuously consumed.
Optionally, in this embodiment, selecting the target obstacle avoidance point based on the boundary points matched with the target virtual obstacle object may be, but is not limited to, a process of repeated correction. It should be noted that the target virtual object moves dynamically, and during movement the path is continuously adjusted and re-determined because of obstacles and the terrain of the map. For example, as shown in fig. 6, the current position point of the target virtual object 602 is point A and its final target position point is point B. The first movement path 606 is determined based on the original map of the navigation mesh, but it passes through the virtual obstacle object 604. Based on the obstacle information of the virtual obstacle object 604 and the attribute information of the current target virtual object 602, it is determined that the target virtual object cannot pass directly through the virtual obstacle object 604, which is therefore determined to be the target virtual obstacle object.
Using the idea of dimension reduction, the floor projection figure of the target virtual obstacle object in the 3-dimensional scene is determined, so that the three-dimensional target virtual obstacle object is converted into a planar 2-dimensional image, for example as shown in fig. 7. The vertices of the floor projection figure, D1, D2, and D3, are obtained based on its shape and size. To ensure that the virtual object can go around the target virtual obstacle object, D1 and D2 are further determined as the boundary points matched with the target virtual obstacle object; the distance values of A-D1-B and A-D2-B are calculated respectively, and the smallest distance value is selected to determine the preferred path, namely A-D2-B.
However, as shown in fig. 8, when restored to the 3-dimensional virtual game scene, point D2 is located at a cliff. If the target virtual object directly takes the A-D2-B route, the route is interrupted and the destination cannot be reached, so a reselection is required: another iteration is performed and point D1 is selected as the target obstacle avoidance point.
For further illustration, as shown in fig. 9, after point D1 is determined as the first target obstacle avoidance point, the process does not necessarily end: the obstacle is not a regular figure, and when the path is restored to the 3-dimensional route, the thickness and height of the 3-dimensional solid still need to be considered. After restoring the 2D result to the 3-dimensional route, a second iteration is therefore performed: point A is replaced with the current point D1, point D1 is taken as the current position point of the target virtual object, and a second target obstacle avoidance point is determined. Even after the second iteration, the route may still need to be updated because of the thickness and height of the target virtual object and the target obstacle object, and so on for N iterations. The reason for these N iterations is that the target virtual object moves dynamically; the core idea of determining the target obstacle avoidance point in this embodiment is to reduce the dimension of the three-dimensional figure, which is difficult to calculate accurately, and convert it into a calculation on a two-dimensional image, thereby achieving the technical goal of determining the target obstacle avoidance point accurately. However, because the target virtual object and the virtual obstacle object have a certain thickness in three dimensions, a single dimension-reduction calculation cannot achieve the technical goal of accurate obstacle avoidance, which is why the iterations are needed.
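The repeated-correction idea described above can be sketched roughly as follows. The callables passed in (boundary-point extraction from the 2D floor projection, 3D reachability, blocking test) stand in for engine-specific checks and are assumptions, not part of this disclosure.

```python
# Schematic sketch of the repeated-correction idea; the callables are placeholders for
# engine-specific checks and are not taken from this disclosure.
import math

def plan_with_iterations(current, target, boundary_points_2d, reachable_in_3d,
                         segment_blocked, max_iterations=8):
    """Return the list of target obstacle avoidance points chosen on the way to target."""
    avoidance_points = []
    for _ in range(max_iterations):
        if not segment_blocked(current, target):
            break                                    # the remaining straight segment is clear
        # 2D step: candidate boundary points from the blocking obstacle's floor projection,
        # ordered by expected distance (current -> candidate -> target)
        candidates = sorted(boundary_points_2d(current, target),
                            key=lambda p: math.dist(current, p) + math.dist(p, target))
        # 3D step: drop candidates that become unreachable once restored to the 3D scene
        chosen = next((p for p in candidates if reachable_in_3d(current, p)), None)
        if chosen is None:
            break                                    # no viable detour found
        avoidance_points.append(chosen)
        current = chosen                             # next iteration starts at the new point
    return avoidance_points
```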
According to the embodiment provided by the present application, a first movement path of a target virtual object in a target virtual scene from a current position point to a target position point is determined, where the first movement path includes at least one waypoint between the current position point and the target position point; obstacle information corresponding to each virtual obstacle object within a target interval range that is within a target threshold distance of the current position point is obtained, where each virtual obstacle object is configured with a unique object identifier in the target virtual scene; when it is determined from the obstacle information that a target virtual obstacle object exists between the current position point and a reference waypoint, a target obstacle avoidance point is selected based on the boundary points matched with the target virtual obstacle object, where the target virtual object avoids the target virtual obstacle object when moving to the target obstacle avoidance point, and the reference waypoint is the next waypoint adjacent to the current position point as the target virtual object moves along the first movement path; and the first movement path is updated with the target obstacle avoidance point to obtain a second movement path. By obtaining the obstacle information corresponding to each virtual obstacle object and selecting the target obstacle avoidance point based on that information, the movement path of the artificial-intelligence-controlled target virtual object is corrected quickly when a new obstacle object is added, simply by determining the target obstacle avoidance point and without rebuilding the navigation data, thereby achieving the technical effect of improving the efficiency of determining the movement path.
As an alternative, before determining the first movement path of the target virtual object in the target virtual scene from the current position point to the target position point, the method further includes:
S1, obtaining obstacle object addition information configured for the target virtual scene, where the obstacle object addition information describes a virtual obstacle object to be added in the target virtual scene by the target account that controls the target virtual object;
S2, adding the virtual obstacle object in the target virtual scene according to the obstacle object addition information.
Optionally, in this embodiment, before the first movement path is determined, the obstacle object addition information configured for the target virtual scene needs to be obtained, where the obstacle object addition information describes the virtual obstacle object added in the target virtual scene by the account that controls the target virtual object. In this way, the uniqueness and specificity of the player account information are used to isolate the obstacles of different players.
Optionally, in this embodiment, the obstacle object addition information may include, but is not limited to, the attributes, size, ID, position information, and the like of the obstacle object. To further illustrate the attribute information, the passability of an obstacle object is relative rather than absolute. For example, if the target obstacle object is a river, a target virtual object with the swimming skill can pass directly, and the river is treated as a non-obstacle; a target virtual object without the swimming skill cannot pass directly, and the river is treated as a target obstacle object.
Optionally, in this embodiment, the size information may be understood, but is not limited to being understood, as the size, length, width, thickness, and so on of the obstacle. The ID information may further include, but is not limited to, the player's account number or identity information, target virtual scene information, and the like. It should be noted that the same player account may also be given a scene ID in each different target virtual scene of the game, and virtual obstacle objects may be added based on the player identity ID and the scene ID, thereby achieving the technical effect of improving the flexibility of labeling virtual obstacle objects.
According to the embodiment provided by the present application, the obstacle object addition information configured for the target virtual scene is obtained, where the obstacle object addition information describes a virtual obstacle object to be added in the target virtual scene by the target account that controls the target virtual object; and the virtual obstacle object is added in the target virtual scene according to the obstacle object addition information. The purpose of obstacle isolation between players is thereby achieved: for example, as shown in fig. 10, an obstacle added by player A has no influence on player B or player B's AI, which achieves the technical effect of improving the accuracy of obstacle isolation.
As an alternative, obtaining the obstacle object addition information configured for the target virtual scene includes:
S1, obtaining an object identifier set of the virtual obstacle objects to be configured by the target account in the target virtual scene;
S2, obtaining, from a virtual obstacle object list, the obstacle object addition information matched with each object identifier contained in the object identifier set, where the virtual obstacle object list includes the obstacle object addition information configured by each account in the target virtual scene.
Optionally, in this embodiment, the object identifier set may be, but is not limited to, the set of all obstacle IDs in the target virtual scene, and an object identifier may be, but is not limited to being, composed of the scene ID, the player ID, the attributes of the obstacle, the size of the obstacle, the position information of the obstacle object, and the like. All of these fields may be recorded in the virtual obstacle object list, or one or more of them may be selected for recording. It should be noted that the virtual obstacle object list includes the addition information of the obstacle objects configured by each account in the target virtual scene.
For a further example, an object identifier may be {R02-WJ0004-cannot pass-large-(X, Y, Z)}, where the first field is the scene ID, the second field is the player ID, the third field is the attribute information, the fourth field is the size, and the fifth field is the position coordinates of the obstacle object. Adding and storing this multi-dimensional information in the form of an identifier set achieves the technical effect of making virtual obstacle object information more convenient to obtain.
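For illustration only, such an identifier set and virtual obstacle object list might be stored and queried as sketched below; the keying scheme and field names are assumptions rather than the format actually used by this disclosure.

```python
# Illustrative only: a virtual obstacle object list keyed by object identifier, loosely
# following the "{scene}-{player}-{attribute}-{size}-{position}" fields shown above.
obstacle_list = {
    "R02-WJ0004": {
        "scene_id": "R02",
        "player_id": "WJ0004",
        "attribute": "cannot pass",
        "size": "large",
        "position": (40.0, 50.0, 60.0),
    },
}

def additions_for(identifier_set):
    """Return the obstacle object addition info matching each identifier in the set."""
    return {oid: obstacle_list[oid] for oid in identifier_set if oid in obstacle_list}

print(additions_for({"R02-WJ0004"}))
```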
According to the embodiment provided by the present application, the object identifier set of the virtual obstacle objects to be configured by the target account in the target virtual scene is obtained, and the obstacle object addition information matched with each object identifier contained in the object identifier set is obtained from the virtual obstacle object list, where the virtual obstacle object list includes the obstacle object addition information configured by each account in the target virtual scene. The information of a virtual obstacle object is thus stored in the virtual object identifier set according to its obstacle ID, which achieves the technical effect of improving the storage efficiency of virtual obstacle object information.
As an alternative, adding the virtual obstacle object in the target virtual scene according to the obstacle object addition information includes:
S1, determining the addition position information and addition size information of the virtual obstacle object to be added according to the obstacle object addition information;
S2, adding the virtual obstacle object into the target virtual scene based on the addition position information and addition size information of the virtual obstacle object.
Optionally, in this embodiment, the addition position information may include, but is not limited to, the specific coordinate information of the virtual obstacle object, or a position type, such as underwater, on a hillside, in the air, and so on. The addition size information may include, but is not limited to, the specific length, width, height, thickness, or the area or number of navigation mesh cells occupied. For a further example, the pre-added virtual obstacle object addition information may specify adding an obstacle box at grassland coordinates (40, 50, 60) that occupies 4 navigation mesh cells.
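A minimal sketch of adding a virtual obstacle object from its addition position information and addition size information is given below; the field names and record layout are assumptions for illustration only.

```python
# Illustrative sketch: build an obstacle record from the addition info (position + size)
# and register it in the scene; the schema here is an assumption, not the patent's.
def add_virtual_obstacle(scene_obstacles, addition_info):
    obstacle = {
        "position": addition_info["position"],        # e.g. (40, 50, 60) on the grassland
        "size": addition_info["size"],                # e.g. dimensions or occupied cell count
    }
    scene_obstacles.append(obstacle)
    return obstacle

scene = []
add_virtual_obstacle(scene, {"position": (40, 50, 60), "size": {"navmesh_cells": 4}})
print(scene)
```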
According to the embodiment provided by the present application, the addition position information and addition size information of the virtual obstacle object to be added are determined according to the obstacle object addition information, and the virtual obstacle object is added into the target virtual scene based on the addition position information and addition size information, thereby achieving the technical effect of improving the diversity of virtual obstacle object information addition.
As an alternative, after the virtual obstacle object is added in the target virtual scene according to the obstacle object addition information, the method further includes:
S1, determining the reference object identifier of a reference virtual obstacle object to be removed from the target virtual scene;
S2, removing the reference virtual obstacle object corresponding to the reference object identifier from the target virtual scene.
Optionally, in this embodiment, when the user needs to remove an obstacle object, the reference object identifier of the reference virtual obstacle object to be removed from the target virtual scene is first determined, where the reference object identifier is information determined from the unique player account.
When a player adds an obstacle, this embodiment records the position and size information of the obstacle and stores obstacles in isolation on a per-player basis. Common obstacle types include box obstacles and cylindrical obstacles; taking a cylindrical obstacle as an example, the center coordinates of the cylinder's base, the radius of the base circle, and the height of the cylinder are recorded, which greatly saves server memory compared with the existing scheme of rebuilding the navigation mesh. After an obstacle is added to the game world, it is given a unique obstacle ID, and when the player removes the obstacle, it only needs to be indexed by that ID, enabling quick removal.
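For illustration, per-player obstacle storage with ID-based removal, recording a cylindrical obstacle by its base-center coordinates, base radius, and height as described above, might look like the following sketch; the class and field names are assumptions.

```python
# Minimal sketch of per-player obstacle storage with ID-based removal, assuming a
# cylindrical obstacle is recorded by base-center coordinates, base radius, and height.
import itertools

class PlayerObstacleStore:
    """Obstacles are isolated per player account and indexed by a unique obstacle ID."""
    _ids = itertools.count(1)

    def __init__(self):
        self._by_player = {}          # player_id -> {obstacle_id: record}

    def add_cylinder(self, player_id, base_center, radius, height):
        obstacle_id = next(self._ids)
        record = {"base_center": base_center, "radius": radius, "height": height}
        self._by_player.setdefault(player_id, {})[obstacle_id] = record
        return obstacle_id

    def remove(self, player_id, obstacle_id):
        """Quick removal: just index by the unique obstacle ID."""
        self._by_player.get(player_id, {}).pop(obstacle_id, None)

store = PlayerObstacleStore()
oid = store.add_cylinder("WJ0004", base_center=(40.0, 50.0, 60.0), radius=2.5, height=3.0)
store.remove("WJ0004", oid)
```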
Through the embodiment provided by the present application, the reference object identifier of the reference virtual obstacle object to be removed from the target virtual scene is determined, and the reference virtual obstacle object corresponding to the reference object identifier is removed from the target virtual scene. Removing virtual obstacle objects by means of the reference object identifier achieves the technical effect of making virtual obstacle object removal more convenient.
As an alternative, selecting the target obstacle avoidance point based on the boundary points matched with the target virtual obstacle object includes:
S1, determining the floor projection figure of the target virtual obstacle object in the target virtual scene;
S2, determining the boundary points matched with the target virtual obstacle object from the vertices of the floor projection figure;
S3, selecting the target obstacle avoidance point according to the positional relationship between the current position point and the boundary points.
Optionally, in this embodiment, the floor projection figure may be, but is not limited to being, understood as the image obtained by projecting the three-dimensional target virtual obstacle object onto the two-dimensional plane; the projected image is determined as the floor projection figure, and its vertices are then obtained. The way the vertices are determined differs depending on the shape of the projected figure: if the boundary of the floor projection figure is formed by arcs (for example a circle, an ellipse, or a sector), a tangent line at the maximum horizontal angle is drawn from the current position point, and the point where the arc intersects the tangent line is determined as a vertex of the floor projection figure; if the boundary of the floor projection figure is formed by straight lines, the endpoints where the straight lines intersect are directly obtained and determined as the vertices.
Optionally, in this embodiment, considering that the floor projection figure converts a three-dimensional figure into a two-dimensional one, while the target virtual obstacle object and the target virtual object have a certain thickness in three-dimensional space, a margin needs to be added to ensure the boundary points are determined reliably. For example, as shown in fig. 11, the actual obstacle is a circle of radius d, but when the result is restored to a concrete route in three-dimensional space, a certain margin must be added; the target obstacle object with the margin added is the collision-detection obstacle.
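Under the assumption that the floor projection is a circle, the boundary points seen from the current position point can be computed as the tangent points of the circle inflated by the margin (the collision-detection radius), as sketched below; this geometric formulation is an illustrative assumption, not the exact computation specified by this disclosure.

```python
# Illustrative geometry sketch: boundary points of a circular floor projection, taken as
# the tangent points seen from the current position point, with the radius inflated by a
# margin to account for the thickness of the obstacle and the moving object (fig. 11).
import math

def tangent_boundary_points(current_xy, center_xy, radius, margin):
    r = radius + margin                              # collision-detection radius
    dx, dy = current_xy[0] - center_xy[0], current_xy[1] - center_xy[1]
    d = math.hypot(dx, dy)
    if d <= r:
        return []                                    # current point lies inside the inflated circle
    alpha = math.acos(r / d)                         # angle between C->P and C->tangent point
    base = math.atan2(dy, dx)
    return [(center_xy[0] + r * math.cos(base + s * alpha),
             center_xy[1] + r * math.sin(base + s * alpha)) for s in (+1.0, -1.0)]

print(tangent_boundary_points((0.0, 0.0), (10.0, 0.0), radius=3.0, margin=1.0))
```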
According to the embodiment provided by the present application, the floor projection figure of the target virtual obstacle object in the target virtual scene is determined; the boundary points matched with the target virtual obstacle object are determined from the vertices of the floor projection figure; and the target obstacle avoidance point is selected according to the positional relationship between the current position point and the boundary points, thereby achieving the technical effect of improving the accuracy of boundary point determination.
As an alternative, selecting the target obstacle avoidance point according to the positional relationship between the current position point and the boundary points includes:
S1, sequentially selecting one boundary point from the boundary points matched with the target virtual obstacle object as the current boundary point, and performing the following steps until all boundary points have been traversed: obtaining a first distance between the current position point and the current boundary point and a second distance between the current boundary point and the target position point, and taking the sum of the first distance and the second distance as the expected distance corresponding to the current boundary point;
S2, determining the boundary point with the shortest expected distance among the boundary points as the target boundary point;
S3, determining the target obstacle avoidance point within a candidate traffic area that includes the target boundary point, where the candidate traffic area is the map area not covered by the floor projection figure.
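A minimal sketch of steps S1-S3 above is given below: the expected distance of each boundary point is computed, the shortest is preferred, and a candidate is kept only if it lies in the candidate traffic area (for example, a point on a cliff as in fig. 8 would be rejected). The in_traffic_area callable is a placeholder for that check and is an assumption.

```python
# Illustrative sketch of steps S1-S3: expected distance per boundary point, shortest one
# preferred, kept only if it lies in the candidate traffic area (reachable in 3D).
import math

def select_target_obstacle_avoidance_point(current, target, boundary_points, in_traffic_area):
    candidates = sorted(boundary_points,
                        key=lambda p: math.dist(current, p) + math.dist(p, target))
    for boundary_point in candidates:
        if in_traffic_area(boundary_point):          # e.g. fig. 8: a point on a cliff fails here
            return boundary_point
    return None

# Example mirroring fig. 7/8: D2 has the shorter expected distance but sits on a cliff,
# so D1 is selected instead.
current, target = (0.0, 0.0), (10.0, 0.0)
d1, d2 = (5.0, 4.0), (5.0, 2.0)
print(select_target_obstacle_avoidance_point(current, target, [d1, d2],
                                             in_traffic_area=lambda p: p != d2))
```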
Alternatively, in this embodiment, for example as shown in fig. 6, the current locus of the target virtual object 602 is point A and the target locus of the target virtual object 602 is point B, and the first moving path 606 is determined on the original navigation-grid map. The floor projection graph of the target virtual obstacle object 604 in the target virtual scene is then determined; for example, as shown in fig. 7, the floor projection graph has three vertices D1, D2 and D3. Since only point D1 and point D2 can be reached from the current locus, point D1 and point D2 are determined as boundary points. The first distance between the current locus and point D1 and the second distance between point D1 and the target locus are obtained, and the sum of the first distance and the second distance is determined as the expected distance of point D1.
For example, as shown in fig. 7, the expected distance of point D2 is smaller than that of point D1, so point D2 is determined as a candidate target boundary point and restored to three-dimensional space. The restored image is shown, for example, in fig. 8, from which the candidate passing area in three-dimensional space is obtained. Since point D2 is located at a cliff, the target locus cannot be reached if this path is executed; point D2 is therefore an impassable point in the three-dimensional scene, is deleted from the candidate target boundary points, and the selection of the boundary point is performed again in the two-dimensional space.
For example, as shown in fig. 12, D1 is selected again as a candidate boundary point and the path A-D1-B is restored to three-dimensional space. Since D1 is located in the candidate passing area, point D1 is determined as the target boundary point, and the route passing through this target boundary point is restored to three-dimensional space.
For example, as shown in fig. 13, after restoration to three-dimensional space, if the route A-D1-B still encounters an obstacle due to factors such as thickness, D1 is taken as the new current locus and the target boundary point is selected again. The route between point D1 and point B is determined as a first moving path S2; when this route is restored to the three-dimensional image it is still affected by a virtual obstacle object, so the floor projection graph of the target virtual obstacle object E in the target virtual scene is determined and the vertex F1 is obtained. The first distance between the current locus D1 and point F1 and the second distance between point F1 and the target locus are acquired, their sum is determined as the expected distance, the point F1 with the shortest expected distance is determined as the target boundary point, and the target obstacle avoidance point is determined as F1 based on the candidate passing area, so that the finally updated route is D1-F1-B, for example as shown in fig. 13.
It should be noted that a plurality of target obstacle avoidance points may be set. That is, if the route A-D1-B still encounters an obstacle, point A is reserved and, as shown for example in fig. 14, on the basis that point D1 is the first target obstacle avoidance point, a second target obstacle avoidance point on the path from D1 to B is determined in the manner described above. The second target obstacle avoidance point is determined to be F2, and the finally updated route is A-D1-F2-B.
If, after restoration to three-dimensional space, point D1 turns out to be impassable, the method can re-select boundary points within a preset range of point D1: a circular area is drawn with point D1 as the center and a preset distance as the radius, the other points in this area are traversed during point searching, and a point that remains passable after restoration to three-dimensional space is determined as the target obstacle avoidance point.
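A minimal sketch of this fallback, under the assumption of a caller-supplied passability check and an illustrative sample count, could be:

```python
import math
from typing import Callable, Optional, Tuple

Point = Tuple[float, float]

def reselect_near(point: Point, radius: float,
                  is_passable: Callable[[Point], bool],
                  samples: int = 16) -> Optional[Point]:
    """Traverse candidate points on a circle around an impassable boundary
    point and return the first one that is still passable once the route is
    restored to three-dimensional space. The sample count is illustrative."""
    px, py = point
    for i in range(samples):
        angle = 2.0 * math.pi * i / samples
        candidate = (px + radius * math.cos(angle), py + radius * math.sin(angle))
        if is_passable(candidate):  # hypothetical check against the 3D scene
            return candidate
    return None  # no passable point found within the preset range
```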
According to the embodiment provided by the application, under the condition that the reference obstacle object exists in the reference interval range which is separated from the current locus by the reference threshold distance, the landing projection graph of the target virtual obstacle object and the landing projection graph of the reference obstacle object in the target virtual scene are determined; determining boundary points matched with the target virtual obstacle object and reference boundary points matched with the reference obstacle object from all vertexes of the floor projection graph; determining a target boundary point from a boundary point set formed by the boundary point and a reference boundary point; and determining the target obstacle avoidance point in the candidate passing area comprising the target boundary point, wherein the candidate passing area is a map area which does not comprise a landing projection graph, thereby realizing the technical effect of improving the accuracy of boundary point determination.
Alternatively, in this embodiment, the candidate passing area may be, but is not limited to, an area that does not include an obstacle. It should be noted that when determining the target obstacle avoidance point, the three-dimensional scene is projected into a two-dimensional image and the target boundary point is determined in that image; terrain does not need to be considered in the two-dimensional image, so when the route is restored to the three-dimensional scene the target boundary point may turn out not to belong to the candidate passing area. By checking the candidate passing area, this embodiment achieves the technical effect of improving the accuracy of determining the target obstacle avoidance point.
As an alternative, selecting the target obstacle avoidance point based on the boundary point matched with the target virtual obstacle object includes:
s1, under the condition that a reference obstacle object exists in a reference interval range which is separated from a current locus by a reference threshold distance, determining a landing projection graph of each of a target virtual obstacle object and the reference obstacle object in a target virtual scene;
s2, determining boundary points matched with the target virtual obstacle object and reference boundary points matched with the reference obstacle object from all vertexes of the floor projection graph;
S3, determining a target boundary point from a boundary point set formed by the boundary point and the reference boundary point;
and S4, determining a target obstacle avoidance point in a candidate traffic area comprising the target boundary point, wherein the candidate traffic area is a map area which does not comprise a landing projection graph.
Alternatively, in this embodiment, it may be understood, but not limited to, that when there are a plurality of reference obstacle objects at the time the target obstacle avoidance point is selected, the landing projection graphs of the target virtual obstacle object and each reference obstacle object are determined, each vertex of these projection graphs is acquired, the boundary points matched with the target virtual obstacle object and the reference boundary points matched with the reference obstacle objects are determined, the target boundary point is determined from the boundary point set formed by these boundary points and reference boundary points, and the target obstacle avoidance point is determined based on the candidate passing area.
As an alternative, determining the target boundary point from the boundary point set consisting of the boundary point and the reference boundary point includes:
s1, sequentially selecting one boundary point from the boundary point set as a current boundary point, and executing the following steps until the boundary point set is traversed: acquiring a first distance between a current locus and a current boundary point and a second distance between the current boundary point and a target locus, and taking the sum of the distances of the first distance and the second distance as an expected distance corresponding to the current boundary point;
S2, determining the boundary point with the shortest expected distance from the boundary point set as the target boundary point.
When the AI encounters an obstacle while moving, the obstacle avoidance point needs to be selected reasonably based on the AI's current position and the obstacle information. Since a player may add a plurality of obstacles, considering only the obstacle that is actually collided with may lead to a situation in which the AI is blocked by obstacle B while moving towards the obstacle avoidance point of obstacle A. To address this, the present embodiment first screens all the obstacles within a certain distance and makes a more comprehensive obstacle avoidance point selection based on their obstacle information.
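This screening step could be sketched as follows; the `Obstacle` fields and the way size is folded into the distance test are assumptions for illustration only.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Obstacle:
    obstacle_id: int               # unique object identifier in the scene
    position: Tuple[float, float]  # center of the ground projection (XY plane)
    size: float                    # characteristic radius / half-extent (assumed)

def screen_obstacles(current_pos: Tuple[float, float],
                     obstacles: List[Obstacle],
                     detect_distance: float) -> List[Obstacle]:
    """Keep every obstacle whose ground projection lies within the
    obstacle-avoidance detection distance of the current position."""
    cx, cy = current_pos
    return [o for o in obstacles
            if math.hypot(o.position[0] - cx, o.position[1] - cy) - o.size <= detect_distance]
```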
Further by way of example, AI obstacle avoidance movement in a 3D game world is a three-dimensional problem; to simplify the calculation, a suitable avoidance point is first selected on the XY plane and then corrected in the Z direction. When the AI detects an obstacle ahead while moving, as shown in fig. 15, all the obstacles within the specified range are first selected based on the obstacle avoidance detection distance during path selection from the current position to the initial target point. For each obstacle within that distance, the present embodiment calculates the avoidance positions on both sides of the obstacle as seen from the current position. These positions are easy to calculate: for a box-shaped obstacle, only the vectors from the current position to the four corners of the rectangle need to be computed, and the pair of corners whose vectors form the largest included angle are taken as the two side avoidance points, for example point A in fig. 15; for a cylindrical obstacle, only the two tangent points from the current position to the circle in the plane need to be calculated, for example point B in fig. 15, and these are selected as the two side avoidance end points.
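For the box-shaped case, picking the two side avoidance points could look like the sketch below, which simply takes the corner pair with the largest included angle as seen from the current position; the cylindrical case would instead use an ordinary circle tangent-point computation. The corner layout is an assumption of this sketch.

```python
import math
from itertools import combinations
from typing import List, Tuple

Point = Tuple[float, float]

def box_side_avoidance_points(current_pos: Point, corners: List[Point]) -> Tuple[Point, Point]:
    """Among the four corners of a box-shaped obstacle's ground projection,
    return the pair whose direction vectors from the current position form
    the largest included angle; these serve as the two side avoidance points."""
    cx, cy = current_pos

    def included_angle(p: Point, q: Point) -> float:
        a1 = math.atan2(p[1] - cy, p[0] - cx)
        a2 = math.atan2(q[1] - cy, q[0] - cx)
        diff = abs(a1 - a2) % (2.0 * math.pi)
        return min(diff, 2.0 * math.pi - diff)  # smaller of the two rotations

    return max(combinations(corners, 2), key=lambda pair: included_angle(*pair))
```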
By the embodiment of the application, the technical effect of improving the accuracy of determining the target boundary point is achieved.
As an alternative, determining the target obstacle avoidance point in the candidate traffic zone including the target boundary point includes:
under the condition that the corresponding space locus in the target virtual scene is a passable locus, the target boundary point is determined to be a target obstacle avoidance point;
under the condition that the corresponding space locus in the target virtual scene is an impassable locus, a first connecting line between the current locus and a target boundary point and a second connecting line between the current locus and a reference boundary point are obtained, wherein the reference boundary point is a boundary point adjacent to the target boundary point in the candidate passing area;
determining a target angle and a reference angle between the first connecting line and the second connecting line, a first distance of the first connecting line and a second distance of the second connecting line, and determining an angle ratio between the reference angle and the target angle;
obtaining a first product of a second distance and an angle ratio and a second product of the first distance and a reference angle ratio, and determining the sum of the first product and the second product as a third distance, wherein the sum of the reference angle ratio and the angle ratio is 1;
Determining the target boundary point as a historical obstacle avoidance point, and repeating the following steps until the target obstacle avoidance point is determined:
determining a position which is at a third distance from the current position as a candidate obstacle avoidance point in the direction that a reference connecting line between the current position and the historical obstacle avoidance point deviates from a reference boundary point by a reference angle;
under the condition that the corresponding space locus in the target virtual scene is a passable locus, the candidate obstacle avoidance point is determined to be the target obstacle avoidance point;
and under the condition that the corresponding spatial locus in the target virtual scene is an impassable locus, determining the candidate obstacle avoidance point as a historical obstacle avoidance point.
Optionally, as a further illustration in this embodiment, after the avoidance points on both sides of each obstacle are calculated, two adjacent obstacles are treated as a group, and the sector area formed by connecting their adjacent avoidance points with the current position is used as the candidate range of avoidance points, such as the yellow sector area bounded by points A and B in fig. 15. When a plurality of obstacles exist, a plurality of candidate sector areas are formed and determined as obstacle avoidance point selection areas. In this embodiment the expected distances corresponding to the end points on the two sides of each sector area are calculated first, and point selection preferably starts from the area with the smallest expected distance. The expected distance equals the distance d1 from the current position to the end point plus the distance d2 from the end point to the initial target point; taking fig. 15 as an example, points will be selected preferentially from the area enclosed by points A and B.
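A small sketch of ordering the candidate sectors by expected distance, assuming each sector is represented just by its two side endpoints, might be:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def dist(p: Point, q: Point) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])

def expected_distance(current_pos: Point, endpoint: Point, target: Point) -> float:
    """d1 from the current position to the endpoint plus d2 from the endpoint
    to the initial target point."""
    return dist(current_pos, endpoint) + dist(endpoint, target)

def order_sectors(current_pos: Point, target: Point,
                  sectors: List[Tuple[Point, Point]]) -> List[Tuple[Point, Point]]:
    """Sort candidate sector areas (each given by its two side endpoints) so
    that point selection starts from the sector with the smallest expected
    distance at either endpoint."""
    return sorted(sectors,
                  key=lambda s: min(expected_distance(current_pos, s[0], target),
                                    expected_distance(current_pos, s[1], target)))
```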
After the candidate area of the obstacle avoidance point is determined, this embodiment performs interpolation sampling with the endpoint having the smaller expected distance as the starting point and the endpoint on the other side as the ending point. The point selection is illustrated in fig. 16: interpolation is carried out in the sector area enclosed by the starting point and the end point, using an angle as the increment. On each interpolation step the candidate obstacle avoidance point is rotated from the starting point towards the end point by a designated angle, which fixes its angular direction, and its position is then determined by computing its distance from the current position with the formula below. If the angle formed at the current position by the lines to the starting point and the end point is α, the rotation angle of the obstacle avoidance point is β, and the distances from the starting point and the end point to the current position are d1 and d2 respectively, the distance d3 between the obstacle avoidance point and the current position is calculated as:
scale=β/α
d3=(1-scale)*d1+scale*d2
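A hedged Python sketch of one interpolation step built directly on this formula is shown below; the handling of the sweep direction and the choice of step angle are assumptions of the sketch, not requirements of the embodiment.

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def interpolated_avoidance_point(current_pos: Point, start: Point, end: Point,
                                 beta: float) -> Point:
    """Rotate by beta radians from the start endpoint towards the end endpoint
    inside the sector, and place the candidate avoidance point at distance
    d3 = (1 - scale) * d1 + scale * d2 with scale = beta / alpha."""
    cx, cy = current_pos
    a_start = math.atan2(start[1] - cy, start[0] - cx)
    a_end = math.atan2(end[1] - cy, end[0] - cx)
    alpha = (a_end - a_start + math.pi) % (2.0 * math.pi) - math.pi  # signed sweep angle
    d1 = math.hypot(start[0] - cx, start[1] - cy)
    d2 = math.hypot(end[0] - cx, end[1] - cy)
    scale = beta / abs(alpha)
    d3 = (1.0 - scale) * d1 + scale * d2
    theta = a_start + math.copysign(beta, alpha)  # rotate towards the end point
    return (cx + d3 * math.cos(theta), cy + d3 * math.sin(theta))
```

Repeated calls with β = step, 2·step, … up to α would reproduce the sweep of fig. 16.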
After the XY-plane coordinates of the obstacle avoidance point are calculated, a Z-direction correction is carried out: the Z coordinate of the obstacle avoidance point is raised to the Z coordinate of the top of the obstacle and the point is then projected downwards, so that the final three-dimensional coordinates in the actual game are obtained. A reachable path from the current position to the obstacle avoidance point is then searched for. If the search fails, the embodiment repeats the interpolation point selection step and continues to take points in the two-dimensional plane; if the search succeeds, the selection of the obstacle avoidance point and the search for waypoints are completed, and the AI object can move towards the obstacle avoidance point along the found waypoints. When the AI object reaches the obstacle avoidance point, it moves from there towards the initial target point again.
According to the embodiment of the application, when the obstacle avoidance route is selected, points are chosen by repeated interpolation so that the path can stay close to the edge of the obstacle. This realizes the technical effect of improving the accuracy of obstacle avoidance, avoids the AI taking a long detour while moving, makes the AI's obstacle avoidance behave more like a human player, and thereby realizes the technical effect of improving the player's experience.
As an alternative, the method for determining a moving path is applied in a game scene, for example, as shown in fig. 17, and the specific steps are as follows:
step S1702: saving position and size information of the obstacle;
step S1704: generating an obstacle unique ID;
step S1706: judging whether the AI mobile waypoint is blocked by the obstacle, executing step S1710 if the AI mobile waypoint is blocked, and executing step S1708 if the AI mobile waypoint is not blocked;
step S1708: AI regular movement does not perform obstacle detection;
step S1710: screening out all barriers in a specified distance under the condition of being blocked by the barriers;
step S1712: calculating obstacle avoidance endpoints on two sides of each obstacle;
step S1714: calculating a desired distance for each endpoint;
step S1716: interpolation sampling and point selection are started from the obstacle segment with the minimum expected distance;
Step S1718: correcting the avoidance point in the Z-axis direction;
step S1720: searching a walking road point list from the current position to the avoidance point;
step S1722: checking whether the path is walkable, returning to step S1716 if it is not walkable, and continuing to step S1724 if it is walkable;
step S1724: moving to an avoidance point;
step S1726: after the obstacle avoidance movement is finished, continuing to move to the initial target point.
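As a non-limiting illustration of steps S1702 and S1704, registering an obstacle's position and size and issuing a unique ID could be sketched as follows (the registry structure is an assumption of the sketch):

```python
import itertools
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class ObstacleRecord:
    obstacle_id: int
    position: Tuple[float, float, float]  # saved position information (S1702)
    size: Tuple[float, float, float]      # saved size information (S1702)

class ObstacleRegistry:
    """Saves obstacle position and size and assigns each obstacle a unique ID (S1704)."""

    def __init__(self) -> None:
        self._next_id = itertools.count(1)
        self._records: Dict[int, ObstacleRecord] = {}

    def add(self, position, size) -> int:
        obstacle_id = next(self._next_id)
        self._records[obstacle_id] = ObstacleRecord(obstacle_id, position, size)
        return obstacle_id

    def remove(self, obstacle_id: int) -> None:
        """Remove an obstacle by its identifier, e.g. when the player deletes it."""
        self._records.pop(obstacle_id, None)
```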
it will be appreciated that in the specific embodiments of the present application, related data such as user information is involved, and when the above embodiments of the present application are applied to specific products or technologies, user permissions or consents need to be obtained, and the collection, use and processing of related data need to comply with related laws and regulations and standards of related countries and regions.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
According to another aspect of the embodiment of the present application, there is also provided a moving path determining apparatus for implementing the above moving path determining method. As shown in fig. 18, the apparatus includes:
a determining unit 1802 configured to determine a first movement path of a target virtual object in a target virtual scene from a current location to a target location, where at least one waypoint is included between the current location and the target location on the first movement path;
an obtaining unit 1804, configured to obtain obstacle information corresponding to each virtual obstacle object in a target interval range separated from the current locus by a target threshold distance, where each virtual obstacle object is configured with a unique object identifier in a target virtual scene;
a selecting unit 1806, configured to select, when it is determined according to the obstacle information that a target virtual obstacle object exists between the current location and a reference waypoint, a target obstacle avoidance point based on a boundary point that matches the target virtual obstacle object, where the target virtual object will avoid the target virtual obstacle object when moving to the target obstacle avoidance point, and the reference waypoint is a next waypoint adjacent to the current location when the target virtual object moves along the first movement path;
And an updating unit 1808, configured to update the first movement path with the target obstacle avoidance point, to obtain a second movement path.
Specific embodiments may refer to the examples described in the above method for determining a moving path, and details are not repeated here in this example.
According to a further aspect of the embodiments of the present application, there is also provided an electronic device for implementing the above-described method of determining a movement path, as shown in fig. 19, the electronic device comprising a memory 1902 and a processor 1904, the memory 1902 having stored therein a computer program, the processor 1904 being arranged to perform the steps of any of the method embodiments described above by means of the computer program.
Alternatively, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, determining a first moving path of a target virtual object in a target virtual scene from a current site to a target site, wherein at least one road point is included between the current site and the target site on the first moving path;
s2, obtaining respective corresponding obstacle information of virtual obstacle objects within a target interval range which is separated from the current locus by a target threshold distance, wherein each virtual obstacle object is configured with a unique object identifier in a target virtual scene;
S3, under the condition that it is determined according to the obstacle information that a target virtual obstacle object exists between the current position and a reference road point, selecting a target obstacle avoidance point based on a boundary point matched with the target virtual obstacle object, wherein the target virtual object will avoid the target virtual obstacle object when moving to the target obstacle avoidance point, and the reference road point is the next road point adjacent to the current position when the target virtual object moves along the first moving path;
s4, updating the first moving path by using the target obstacle avoidance points to obtain a second moving path.
Alternatively, it will be understood by those skilled in the art that the structure shown in fig. 19 is only schematic, and the electronic device may also be a terminal device such as a smart phone (e.g. an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc. The structure shown in fig. 19 does not limit the structure of the electronic device described above. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 19, or have a different configuration than shown in fig. 19.
The memory 1902 may be configured to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for determining a movement path in the embodiment of the present application, and the processor 1904 executes the software programs and modules stored in the memory 1902, thereby performing various functional applications and data processing, i.e., implementing the method for determining a movement path described above. Memory 1902 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 1902 may further include memory located remotely from processor 1904, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1902 may be, but is not limited to, used for storing information of a target virtual object, a first moving path, a second moving path, and the like. As an example, as shown in fig. 19, the above memory 1902 may include, but is not limited to, a determination unit 1802, an acquisition unit 1804, a selection unit 1806, and an update unit 1808 in the above-described determination apparatus of a moving path. In addition, other module units in the above-mentioned determination device of the moving path may be included, but are not limited to, and are not described in detail in this example.
Optionally, the transmission device 1906 is used to receive or transmit data via a network. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission device 1906 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices and routers via a network cable to communicate with the internet or a local area network. In one example, the transmission device 1906 is a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In addition, the electronic device further includes: a display 1908 for displaying information such as the target virtual object, the first movement path, and the second movement path; and a connection bus 1910 for connecting the respective module components in the above-described electronic device.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting the plurality of nodes through a network communication. Among them, the nodes may form a Peer-To-Peer (P2P) network, and any type of computing device, such as a server, a terminal, etc., may become a node in the blockchain system by joining the Peer-To-Peer network.
According to one aspect of the present application, there is provided a computer program product comprising a computer program/instruction containing program code for executing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via a communication portion, and/or installed from a removable medium. When executed by a central processing unit, performs various functions provided by embodiments of the present application.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
It should be noted that the computer system of the electronic device is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
The computer system includes a central processing unit (Central Processing Unit, CPU) which can execute various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) or a program loaded from a storage section into a random access Memory (Random Access Memory, RAM). In the random access memory, various programs and data required for the system operation are also stored. The CPU, the ROM and the RAM are connected to each other by bus. An Input/Output interface (i.e., I/O interface) is also connected to the bus.
The following components are connected to the input/output interface: an input section including a keyboard, a mouse, etc.; an output section including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and the like, and a speaker, and the like; a storage section including a hard disk or the like; and a communication section including a network interface card such as a local area network card, a modem, and the like. The communication section performs communication processing via a network such as the internet. The drive is also connected to the input/output interface as needed. Removable media such as magnetic disks, optical disks, magneto-optical disks, semiconductor memories, and the like are mounted on the drive as needed so that a computer program read therefrom is mounted into the storage section as needed.
In particular, the processes described in the various method flowcharts may be implemented as computer software programs according to embodiments of the application. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via a communication portion, and/or installed from a removable medium. The computer program, when executed by a central processing unit, performs the various functions defined in the system of the application.
According to one aspect of the present application, there is provided a computer-readable storage medium, from which a processor of a computer device reads the computer instructions, the processor executing the computer instructions, causing the computer device to perform the methods provided in the various alternative implementations described above.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, determining a first moving path of a target virtual object in a target virtual scene from a current site to a target site, wherein at least one road point is included between the current site and the target site on the first moving path;
s2, obtaining respective corresponding obstacle information of virtual obstacle objects within a target interval range which is separated from the current locus by a target threshold distance, wherein each virtual obstacle object is configured with a unique object identifier in a target virtual scene;
s3, under the condition that it is determined according to the obstacle information that a target virtual obstacle object exists between the current position and a reference road point, selecting a target obstacle avoidance point based on a boundary point matched with the target virtual obstacle object, wherein the target virtual object will avoid the target virtual obstacle object when moving to the target obstacle avoidance point, and the reference road point is the next road point adjacent to the current position when the target virtual object moves along the first moving path;
S4, updating the first moving path by using the target obstacle avoidance points to obtain a second moving path.
Alternatively, in this embodiment, it will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be performed by a program for instructing a terminal device to execute the steps, where the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above-described computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing one or more computer devices (which may be personal computers, servers or network devices, etc.) to perform all or part of the steps of the method of the various embodiments of the present application.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and are merely a logical functional division, and there may be other manners of dividing the apparatus in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.
Claims (14)
1. A method of determining a movement path, comprising:
determining a first moving path of a target virtual object in a target virtual scene from a current position point to a target position point, wherein at least one path point is included between the current position point and the target position point on the first moving path;
obtaining obstacle information corresponding to each virtual obstacle object in a target interval range which is separated from the current locus by a target threshold distance, wherein each virtual obstacle object is configured with a unique object identifier in the target virtual scene;
selecting a target obstacle avoidance point based on a boundary point matched with a target virtual obstacle object under the condition that it is determined according to the obstacle information that the target virtual obstacle object exists between the current position and a reference road point, wherein the target virtual object will avoid the target virtual obstacle object when moving to the target obstacle avoidance point, and the reference road point is the next road point adjacent to the current position when the target virtual object moves along the first moving path;
and updating the first moving path by using the target obstacle avoidance point to obtain a second moving path.
2. The method of claim 1, further comprising, prior to the determining the first path of movement of the target virtual object in the target virtual scene from the current location to the target location:
obtaining obstacle object adding information configured for the target virtual scene, wherein the obstacle object adding information indicates a virtual obstacle object that a target account corresponding to the target virtual object controls to add in the target virtual scene;
and adding the virtual obstacle object in the target virtual scene according to the obstacle object adding information.
3. The method of claim 2, wherein the obtaining obstacle object addition information configured for the target virtual scene comprises:
acquiring an object identification set of a virtual obstacle object to be configured in the target virtual scene of the target account;
and obtaining obstacle object adding information which is respectively matched with each object identifier contained in the object identifier set from a virtual obstacle object list, wherein the virtual obstacle object list comprises a plurality of pieces of obstacle object adding information configured in the target virtual scene by each account.
4. The method of claim 2, wherein the adding the virtual obstacle object in the target virtual scene according to the obstacle object addition information comprises:
determining adding position information and adding size information of the virtual obstacle object to be added according to the obstacle object adding information;
the virtual obstacle object is added to the target virtual scene based on the addition position information and the addition size information of the virtual obstacle object.
5. The method of claim 2, wherein after adding the virtual obstacle object in the target virtual scene according to the obstacle object addition information, further comprising:
Determining a reference object identifier of a reference virtual obstacle object to be removed in the target virtual scene;
and removing the reference virtual obstacle object corresponding to the reference object identifier from the target virtual scene.
6. The method of claim 1, wherein the selecting a target obstacle avoidance point based on boundary points matching the target virtual obstacle object comprises:
determining a landing projection graph of the target virtual obstacle object in the target virtual scene;
determining the boundary point matched with the target virtual obstacle object from all vertexes of the floor projection graph;
and selecting the target obstacle avoidance point according to the position relation between the current locus and the boundary point.
7. The method of claim 6, wherein selecting the target obstacle avoidance point based on the positional relationship between the current location and the boundary point comprises:
sequentially selecting one boundary point from the boundary points matched with the target virtual obstacle object as a current boundary point, and executing the following steps until traversing the boundary point: acquiring a first distance between the current locus and the current boundary point and a second distance between the current boundary point and the target locus, and taking the sum of the distances of the first distance and the second distance as an expected distance corresponding to the current boundary point;
Determining the boundary point with the shortest expected distance from the boundary points as a target boundary point;
and determining the target obstacle avoidance point in a candidate passing area comprising the target boundary point, wherein the candidate passing area is a map area which does not comprise the landing projection graph.
8. The method of claim 1, wherein the selecting a target obstacle avoidance point based on boundary points matching the target virtual obstacle object comprises:
determining a landing projection graph of a target virtual obstacle object and each reference obstacle object in the target virtual scene under the condition that the reference obstacle object exists in a reference interval range which is separated from the current locus by a reference threshold distance;
determining the boundary point matched with the target virtual obstacle object and a reference boundary point matched with the reference obstacle object from all vertexes of the floor projection graph;
determining a target boundary point from a boundary point set formed by the boundary point and the reference boundary point;
and determining the target obstacle avoidance point in a candidate passing area comprising the target boundary point, wherein the candidate passing area is a map area which does not comprise the landing projection graph.
9. The method of claim 8, wherein the determining a target boundary point from a set of boundary points consisting of the boundary point and the reference boundary point comprises:
sequentially selecting one boundary point from the boundary point set as a current boundary point, and executing the following steps until traversing the boundary point set: acquiring a first distance between the current locus and the current boundary point and a second distance between the current boundary point and the target locus, and taking the sum of the distances of the first distance and the second distance as an expected distance corresponding to the current boundary point;
and determining the boundary point with the shortest expected distance from the boundary point set as the target boundary point.
10. The method of claim 8, wherein the determining the target obstacle avoidance point in the candidate traffic zone including the target boundary point comprises:
the target boundary point is determined to be the target obstacle avoidance point under the condition that the corresponding space locus in the target virtual scene is a passable locus;
under the condition that the corresponding spatial locus in the target virtual scene is an impassable locus, a first connecting line between the current locus and the target boundary point and a second connecting line between the current locus and a reference boundary point are obtained, wherein the reference boundary point is a boundary point adjacent to the target boundary point in the candidate passing area;
determining a target angle and a reference angle between the first connecting line and the second connecting line, and a first distance of the first connecting line and a second distance of the second connecting line, and determining an angle ratio between the reference angle and the target angle;
obtaining a first product of the second distance and the angle ratio and a second product of the first distance and a reference angle ratio, and determining a sum of the first product and the second product as a third distance, wherein the sum of the reference angle ratio and the angle ratio is 1;
determining the target boundary point as a historical obstacle avoidance point, and repeating the following steps until the target obstacle avoidance point is determined:
determining a position point which is at a third distance from the current position point as a candidate obstacle avoidance point in the direction that a reference connecting line between the current position point and the historical obstacle avoidance point deviates from the reference boundary point by the reference angle;
under the condition that the corresponding space locus in the target virtual scene is a passable locus, the candidate obstacle avoidance point is determined to be the target obstacle avoidance point;
and under the condition that the corresponding spatial locus in the target virtual scene is an impassable locus, the candidate obstacle avoidance point is determined to be the historical obstacle avoidance point.
11. A moving path determining apparatus, comprising:
the device comprises a determining unit, a first moving unit and a second moving unit, wherein the determining unit is used for determining a first moving path of a target virtual object in a target virtual scene from a current locus to a target locus, and at least one road point is included between the current locus and the target locus on the first moving path;
an obtaining unit, configured to obtain obstacle information corresponding to each virtual obstacle object in a target interval range away from the current locus by a target threshold distance, where each virtual obstacle object is configured with a unique object identifier in the target virtual scene;
a selecting unit, configured to select a target obstacle avoidance point based on a boundary point matched with the target virtual obstacle object when it is determined that a target virtual obstacle object exists between the current location and a reference waypoint according to the obstacle information, where the target virtual object will avoid the target virtual obstacle object when moving to the target obstacle avoidance point, and the reference waypoint is a next waypoint adjacent to the current location when the target virtual object moves along the first movement path;
and the updating unit is used for updating the first moving path by using the target obstacle avoidance point to obtain a second moving path.
12. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program when run performs the method of any one of claims 1 to 10.
13. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 10.
14. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of the claims 1 to 10 by means of the computer program.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310121996.0A CN116943206A (en) | 2023-01-31 | 2023-01-31 | Method and device for determining moving path, storage medium and electronic equipment |
PCT/CN2023/131985 WO2024159865A1 (en) | 2023-01-31 | 2023-11-16 | Method and apparatus for determining moving path, medium, device, and program product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310121996.0A CN116943206A (en) | 2023-01-31 | 2023-01-31 | Method and device for determining moving path, storage medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116943206A true CN116943206A (en) | 2023-10-27 |
Family
ID=88457099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310121996.0A Pending CN116943206A (en) | 2023-01-31 | 2023-01-31 | Method and device for determining moving path, storage medium and electronic equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116943206A (en) |
WO (1) | WO2024159865A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024159865A1 (en) * | 2023-01-31 | 2024-08-08 | 腾讯科技(深圳)有限公司 | Method and apparatus for determining moving path, medium, device, and program product |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006130123A (en) * | 2004-11-08 | 2006-05-25 | Sega Corp | Game apparatus, and game program performed in the same |
CN102467751A (en) * | 2010-11-10 | 2012-05-23 | 上海日浦信息技术有限公司 | Rubber band algorithm of three-dimensional virtual scene rapid path planning |
CN109999498A (en) * | 2019-05-16 | 2019-07-12 | 网易(杭州)网络有限公司 | A kind of method for searching and device of virtual objects |
CN111888763B (en) * | 2020-08-26 | 2024-02-02 | 网易(杭州)网络有限公司 | Method and device for generating obstacle in game scene |
CN115601521B (en) * | 2022-12-14 | 2023-03-10 | 腾讯科技(深圳)有限公司 | Path processing method, electronic device, storage medium and program product |
CN116943206A (en) * | 2023-01-31 | 2023-10-27 | 腾讯科技(深圳)有限公司 | Method and device for determining moving path, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
WO2024159865A1 (en) | 2024-08-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40098080; Country of ref document: HK |