CN112221143B - Method, device and storage medium for controlling movement of virtual object - Google Patents


Info

Publication number
CN112221143B
CN112221143B (Application CN202011071517.1A)
Authority
CN
China
Prior art keywords
map
virtual object
target
key point
game
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011071517.1A
Other languages
Chinese (zh)
Other versions
CN112221143A (en)
Inventor
黄超 (Huang Chao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011071517.1A priority Critical patent/CN112221143B/en
Publication of CN112221143A publication Critical patent/CN112221143A/en
Application granted granted Critical
Publication of CN112221143B publication Critical patent/CN112221143B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 — Controlling game characters or game objects based on the game progress
    • A63F13/56 — Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Theoretical Computer Science
  • Human Computer Interaction
  • Processing Or Creating Images

Abstract

The embodiment of the application relates to the field of artificial intelligence and provides a method, a device and a storage medium for controlling the movement of a virtual object. The method includes: acquiring a way-finding task of a virtual object and a key point set of the virtual object in a first way-finding map, where the way-finding task includes at least one way-finding path through a local map, and each key point in the set is a position at which the virtual object exhibits a preset movement behavior while moving along a target path; acquiring the real-time position of the virtual object in the first way-finding map; and, according to the key point set and the real-time position, controlling the virtual object to move from the real-time position to a target key point in the first way-finding map and, once within the effective range of the target key point, on to the next key point, until the way-finding task is completed. With this scheme the viewing angle of the game character can be changed at any time, the position of the game character can be deduced from a local map, and the game character can be made to move along a relatively fixed path.

Description

Method, device and storage medium for controlling movement of virtual object
Technical Field
The embodiment of the application relates to the technical field of artificial intelligence, in particular to a method, a device and a storage medium for controlling the movement of a virtual object.
Background
In the gaming industry, game testing can be implemented based on Artificial Intelligence (AI) technology. At present, the positions of game characters are mainly identified in the following way:
In game-AI schemes that set routes on a global game map, the position of a game character in the global map is identified by color. A path for the character to move along is then preset manually, i.e. the key points on the path are marked by hand, and the character is controlled to move through those key points to complete the game test. Although color-based identification of a character's position in a complete game map requires no manually recorded samples, it requires game images that contain the complete map. Most current gunfight games, however, show only a local radar map, and the player cannot change the character's viewing angle, so game images and game actions cannot be recorded from arbitrary viewing angles. Obtaining game images that contain the complete game map is therefore difficult and incomplete, the accurate position of the character in the global map is hard to obtain, and ultimately the efficiency and coverage of the overall game test suffer.
Disclosure of Invention
The embodiment of the application provides a method, a device and a storage medium for controlling the movement of a virtual object, which can change the visual angle of a game role at any time, deduce the position of the game role from a local game map, and enable the game role to move according to a relatively fixed path.
In a first aspect, an embodiment of the present application provides a method for controlling movement of a virtual object, where the method includes:
acquiring a way-finding task of a virtual object and acquiring a key point set of the virtual object in a first way-finding map; the way-finding task comprising way-finding paths of the virtual object in at least one local map;
the key point set comprising a plurality of sequentially arranged key points, each key point being a position at which the virtual object exhibits a preset movement behavior while moving along a target path;
acquiring a real-time position of the virtual object in the first way-finding map;
controlling, according to the key point set and the real-time position of the virtual object in the first way-finding map, the virtual object to move from the real-time position to a target key point in the first way-finding map and then, within the effective range of the target key point, on to the next key point, until the way-finding task is completed; the target key point being the key point in the key point set whose distance to the real-time position is smaller than a preset distance.
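As a concrete illustration of this control loop, the following sketch selects the target key point (the key point closer to the real-time position than the preset distance) and then advances through the ordered key point list, switching to the next key point once the object is within the effective range. This is a minimal, hedged example: the coordinate tuples, the threshold values, and the `get_position`/`move_toward` callbacks are illustrative stand-ins for the game-side interfaces, not part of the disclosure.

```python
import math

def find_target_keypoint(keypoints, position, max_distance):
    """Return the first keypoint whose distance to the real-time
    position is below the preset distance, or None if there is none."""
    for kp in keypoints:
        if math.dist(kp, position) < max_distance:
            return kp
    return None

def run_wayfinding(keypoints, get_position, move_toward,
                   max_distance=30.0, effective_range=5.0):
    """Walk the ordered keypoint list: move toward the target keypoint,
    then advance to the next one once inside its effective range."""
    pos = get_position()
    target = find_target_keypoint(keypoints, pos, max_distance)
    if target is None:
        return False  # real-time position is too far from every keypoint
    idx = keypoints.index(target)
    while idx < len(keypoints):
        pos = get_position()
        if math.dist(pos, keypoints[idx]) <= effective_range:
            idx += 1  # within effective range: head for the next keypoint
            continue
        move_toward(keypoints[idx], pos)
    return True
```

A real controller would issue movement commands through the game client; here `move_toward` simply abstracts that step.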
In some embodiments, after a second grayscale image corresponding to the real-time pitch angle is obtained, the method further comprises:
displaying an azimuth indication icon on the interactive interface, the azimuth indication icon corresponding to the real-time pitch angle of the first-person viewing angle.
In some embodiments, after the target matching position is obtained, the white area in the second grayscale image is fused with the game-map area at the target matching position to obtain an updated stitched map. The operation is then repeated: the next local map is fused (i.e. stitched) in, and the stitched map is continuously updated until the first way-finding map (i.e. the global game map, the stitched global map) is obtained. The white area is the area associated with the game map.
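The fusion (stitching) step can be sketched as follows. This is a simplified model under assumed representations: maps are 2-D lists of gray values, the mask is a Boolean grid marking the "white", map-associated pixels, and `top`/`left` stand for the target matching position; none of these names come from the patent itself.

```python
def fuse_local_map(stitched, local, mask, top, left):
    """Copy into the stitched map only those local-map pixels whose
    mask value is True (the white, map-associated area), placing the
    local map at the matched (top, left) offset."""
    for r, row in enumerate(local):
        for c, val in enumerate(row):
            if mask[r][c]:  # white pixel: belongs to the game map
                stitched[top + r][left + c] = val
    return stitched
```

Repeating this call for each matched local map gradually builds up the stitched global map.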
In some embodiments, the preset movement behavior comprises at least one of starting to move, turning while moving, or stopping movement;
determining the position of the virtual object in the first way-finding map comprises:
when the movement behavior of the virtual object on the target path matches at least one of starting to move, turning, or stopping, matching a first local map and a first grayscale image of the first local map against the first way-finding map to locate the position of the virtual object in the first way-finding map.
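A minimal version of this matching step, using a plain sum-of-absolute-differences template search of the local-map grayscale image over the way-finding map, can look as follows. The list-of-lists grayscale representation is an assumption made for illustration; a real implementation would typically use an optimized template matcher.

```python
def match_position(global_map, template):
    """Slide the local-map grayscale template over the way-finding map
    and return the (row, col) offset with the lowest sum of absolute
    differences, i.e. the best matching position."""
    gh, gw = len(global_map), len(global_map[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for top in range(gh - th + 1):
        for left in range(gw - tw + 1):
            sad = sum(abs(global_map[top + r][left + c] - template[r][c])
                      for r in range(th) for c in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (top, left)
    return best_pos
```

The returned offset plays the role of the virtual object's position in the first way-finding map; the brute-force search is O(map area x template area), so production code would use a faster method.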
In a second aspect, an embodiment of the present application provides a virtual object movement control apparatus having the function of implementing the method for controlling movement of a virtual object provided in the first aspect. The function can be realized by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above function, and the modules may be software and/or hardware.
In some embodiments, the virtual object movement control device includes:
the input-output module is used for acquiring a way-finding task of a virtual object and acquiring a key point set of the virtual object in a first way-finding map; the way-finding task comprises way-finding paths of the virtual object in at least one local map; the key point set comprises a plurality of sequentially arranged key points, each key point being a position at which the virtual object exhibits a preset movement behavior while moving along a target path;
the input-output module is further used for acquiring the real-time position of the virtual object in the first way-finding map;
the processing module is used for controlling, according to the key point set and the real-time position of the virtual object in the first way-finding map, the virtual object to move from the real-time position to a target key point in the first way-finding map and then, within the effective range of the target key point, on to the next key point, until the way-finding task is completed; the target key point is the key point in the key point set whose distance to the real-time position is smaller than the preset distance.
In another aspect, a virtual object movement control apparatus is provided, which includes at least one processor, a memory, and a transceiver that are connected; the memory stores a computer program, and the processor calls the computer program in the memory to execute the method according to the above aspects.
In yet another aspect, embodiments of the present application provide a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform the method of the above aspects.
According to an aspect of the application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the method provided in the first aspect and the various embodiments of the first aspect.
Compared with the prior art, in the scheme provided by the embodiment of the application, the key point set comprises a plurality of key points arranged in order, each key point being a position at which the virtual object exhibits a preset movement behavior while moving along the target path. Controlling the virtual object to move through the first way-finding map according to the key point set and its real-time position therefore allows the way-finding task to be completed smoothly, which improves test efficiency. In addition, compared with the prior art that identifies character positions from a global game map by color, the real-time position obtained in this embodiment is more accurate: the virtual object can be moved toward the key point on the way-finding path where its real-time position lies, and after that key point is reached, on toward the next key point until the whole way-finding task is completed. By combining the key point set with an accurately located real-time position in the first way-finding map, the moving state of the virtual object (its moving direction, moving path, and so on) can be controlled better, so movement in the first way-finding map can be tested automatically and both test efficiency and test coverage improve.
Drawings
FIG. 1a is a schematic illustration of an interface of a sample game in an embodiment of the present application;
FIG. 1b is a schematic interface diagram illustrating a game character moving from a current key point to a next key point in the embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for controlling movement of a virtual object according to an embodiment of the present disclosure;
FIG. 3a is a schematic diagram of a set of key points in an embodiment of the present application;
FIG. 3b is a flowchart illustrating a method for controlling movement of a virtual object according to an embodiment of the present application;
FIG. 4a is a flowchart illustrating a method for controlling movement of a virtual object according to an embodiment of the present disclosure;
FIG. 4b is a schematic view of an identification area corresponding to a viewing angle in an embodiment of the present application;
FIG. 4c is a flowchart illustrating a method for controlling movement of a virtual object according to an embodiment of the present application;
FIG. 5a is a schematic diagram illustrating an embodiment of obtaining candidate regions;
FIG. 5b is a schematic diagram of a map stitched on a map drawing board in the embodiment of the present application;
FIG. 5c is a schematic diagram of a global P city map in an embodiment of the present application;
FIG. 6 is a flowchart illustrating a method for controlling movement of a virtual object according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a virtual object movement control apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an entity device for the method for controlling movement of a virtual object in the embodiment of the present application;
FIG. 9 is a schematic structural diagram of a server in an embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description, claims, and drawings of the embodiments of the application are used to distinguish between similar elements, not necessarily to describe a particular sequence or chronological order. It will be appreciated that data so used may be interchanged under appropriate circumstances, so the embodiments described herein may be practiced in orders other than those illustrated or described. Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those explicitly listed, and may include other steps or modules not explicitly listed or inherent to it. The division into blocks presented in an embodiment of the present application is merely a logical division and may be implemented differently in practice: multiple blocks may be combined or integrated into another system, or some features may be omitted or not implemented. Couplings shown or discussed between blocks, whether direct or communicative, may pass through interfaces, and indirect coupling or communication between blocks may be electrical or of a similar form; the embodiments of the present application are not limited in this respect. Moreover, modules or sub-modules described as separate components may or may not be physically separated, may or may not be physical modules, and may be distributed over a plurality of circuit modules; some or all of them may be selected according to actual needs to achieve the purpose of the embodiments of the present application.
The embodiment of the application provides a method, a device and a storage medium for controlling the movement of a virtual object, which can be used to realize an artificial-intelligence map-running function in a game and provide a basis for subsequent game-test scenarios. The scheme can run on a server or on a terminal device; the embodiment of the application takes the server as the example, with the virtual object movement control device deployed on the server. It should be understood that the embodiments of the present application take a game as the example of an interactive application, and correspondingly the processing below is described for a game map; other types of interactive applications can follow the same processing, and the embodiments of the present application are not limited in this regard. The games in this scheme may include, but are not limited to, first-person shooting games (FPS), running games, massively multiplayer online role-playing games (RPG), multiplayer online battle arena games (MOBA), music games (MSC), sports games (SPG), and the like.
The solution of the embodiment of the present application is based on Artificial Intelligence (AI) technology; some basic concepts in the AI field are introduced below. AI is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, AI is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. AI studies the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning, and decision making. AI technology is a comprehensive subject covering a broad range of fields, at both the hardware and the software level. Basic AI technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big-data processing, operation/interaction systems, and mechatronics. AI software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning / deep learning.
During game-test operation, a main task network can be used; this network is obtained through Machine Learning (ML) training. With the research and progress of AI technology, AI has been developed and studied in many directions. Machine learning is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and more. It studies how a computer can simulate or realize human learning behavior so as to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of AI and the fundamental way to make computers intelligent; it is applied throughout all fields of AI. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching-based learning.
Identifying a map in a game image with the main task network involves Computer Vision (CV) technology. CV is the science of studying how to make machines "see": using machine vision in place of human eyes to identify, track, and measure targets, and further processing the resulting graphics so that the computer produces images more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies the related theories and techniques in an attempt to build artificial intelligence systems able to capture information from images or multidimensional data. Computer vision technology generally includes image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
It should be particularly noted that the server according to the embodiment of the present application may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing cloud-computing services. The terminal may be, but is not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be connected directly or indirectly through wired or wireless communication, and the application is not limited herein.
The embodiment of the application mainly provides the following technical scheme:
1. The game character is controlled to move around the target location (for example, to circle it), and game samples appearing within the character's viewing-angle range during the movement are recorded. The game sample may refer to the interface schematic shown in FIG. 1a.
2. A corresponding mask is generated according to the game character and the viewing-angle color in the local game map of the game sample, so as to remove information irrelevant to the game map.
3. The game character is controlled to move along the target path; at the starting point, turning points, and end point of the path the character is matched against the local game map using the masked template, the character's key-point positions in the game map are deduced, and the key-point position sequence is recorded as the path.
4. The plurality of local game maps are stitched based on the key-point sequence to obtain a complete game map (i.e., the global game map).
5. In the test stage, according to the position of the game character in the game map and the key-point sequence, the game object is controlled to move toward the next key point to carry out the map-running task of the game. For example, FIG. 1b shows a schematic view of an interface where a game character is running from the current key point toward the next key point.
In the following, the preparation work before testing the interactive application is described: recording interactive samples, drawing the global way-finding map, and setting the key point set, i.e. the actual control attributes set in the interactive application for each virtual object. Specifically, as shown in FIG. 2, before controlling the movement behavior of the virtual object in the first way-finding map according to the way-finding task, the embodiment of the present application includes:
101. a first instruction of a user is received.
The first instruction is used for controlling the virtual object to move according to the target path.
The target path is a way-finding path customized for the virtual object according to its object characteristics; when the virtual object moves along the target path to the end point, one complete way-finding task is finished. For example, if the virtual object is a hero character that first appears at location A and then performs the way-finding task until it finally appears at location B, the path from location A to location B in the map can be set as the target path.
102. And responding to the first instruction, and controlling the virtual object to move according to a target path.
The target path is the way-finding path customized for a specific virtual object (for example, the virtual object above) when a way-finding task is set for it in the game application. Because the specific virtual object needs to move along the target path, the target path can be bound to it, so that in the test stage or the online stage of the game application the object moves along the target path.
For example, a game character is manually controlled (e.g., via the first instruction) to find its way along the target path so as to complete the way-finding task on that path. The center of the local game map is generally the game character; for example, if the game character is Guan Yu, a map-running task needs to be tailored for that specific character and the map-running task (for example, the target path) bound to it, so that Guan Yu moves along the fixed target path in the game. In the embodiment of the present application, the target path may be composed of multiple sub-paths; their direction, connection mode, and connection points are not limited, as long as the virtual object reaches the end point when the way-finding task is completed.
In the embodiment of the application, "in response to" indicates the condition or state on which an executed operation depends; when that condition or state is satisfied, the one or more operations may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are executed.
103. When the movement behavior of the virtual object matches a preset movement behavior, the first local map and the first grayscale image of the first local map are matched against the first way-finding map to obtain a first position of the virtual object in the first way-finding map.
104. The first position is taken as a preset key point, and the preset key point is added to the key point set.
Specifically, since the virtual object moves in real time in the first way-finding map, its position in the first way-finding map also changes in real time, and the way-finding task corresponds to a number of positions that exhibit features such as preset movement behaviors, namely the key points. Therefore, while the virtual object is being controlled, every time its movement behavior changes to a preset movement behavior, the position where the moving state changed is taken as a key point, and each key point obtained is added to the key point set. In other words, a number of first positions are obtained: each first position at which the virtual object exhibits a preset movement behavior while moving along the target path is taken as a preset key point, so that the key point set containing all key points of the target path is obtained.
The preset movement behavior may include at least one of starting, stopping, or turning. Starting may occur at any position: it includes starting from the initial position of the way-finding and restarting movement after a stop during the way-finding. Stopping is the behavior of ending the virtual object's current moving state during the way-finding. Turning is the behavior of switching from the current moving state to a new one by changing the moving direction during the way-finding.
The key point set comprises a plurality of key points arranged in order (in some scenarios it may contain only one key point); a key point is a position at which the virtual object exhibits a preset movement behavior while moving along the target path. Each key point in the set corresponds to one unique position. The key point set represents a complete path along which the virtual object moves from the first position to the second position.
Since a preset movement behavior represents a change of the virtual object's moving state, a key point determined from it is a key turning point of the object's way-finding task. The key point set can then serve as the basis for controlling the movement of the virtual object in the embodiment corresponding to FIG. 6, i.e. the virtual object is controlled to move toward the next key point until the whole path has been run.
After the positions of the game character are obtained, the game samples from point A to point B can be recorded. Specifically, the key points corresponding to the starting point, the turning points, and the end point are recorded and stored as a time series so as to preserve their order. That is, every time a key-point position is reached, it is appended to the key-point sequence by identifying the position of the game character in the stitched map. The finally generated sequence of path key points is shown in FIG. 3a: the solid points correspond to key points, and the solid lines correspond to the paths along which the game character moves.
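The recording of the key-point sequence can be sketched as a single pass over a time-ordered trace of (position, behavior) pairs, keeping only the positions where a preset movement behavior occurs. The trace format and the behavior labels `start`/`turn`/`stop` are illustrative assumptions, not identifiers from the patent.

```python
def extract_keypoints(trace):
    """Scan a time-ordered (position, behavior) trace and keep, in
    order, the positions where a preset movement behavior occurs."""
    preset = {"start", "turn", "stop"}  # assumed behavior labels
    return [pos for pos, behavior in trace if behavior in preset]
```

The returned list is the ordered key point set: its first element corresponds to the starting point and its last to the end point of the target path.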
It can be understood that while the key point set is being obtained, game samples can be recorded in real time to document the virtual object executing the way-finding task along the target path, thereby fixing the object's attributes in the interactive application. On the one hand, the effect of the virtual object executing the way-finding task can be tested in the subsequent test stage; on the other hand, after the virtual object later goes online, the recording provides users with a reference for learning about it. Specifically, a plurality of game-map images are acquired from the interactive interface.
In some embodiments, the game-map image may be a radar-map image that simultaneously presents the radar direction indications of at least two game characters in a local map. For example, the game-map image is a local area of the global game map and includes the game character's arrow, the character's first-person viewing-angle area, and the direction arrows of the character's teammates.
In some embodiments, the obtaining a plurality of game map images from the interactive interface includes:
acquiring a plurality of game images of the virtual object when the virtual object is interacted in an interactive interface;
removing a target image from the plurality of game images, the target image comprising a virtual object in a game map and interference information within a first-person viewing angle region of the virtual object;
and obtaining the plurality of game map images according to the game images with the target images removed.
The game image is an image for composing or presenting the visual form of objects, scenery, roads, buildings, and the like in a certain area of the interactive application, such as facilities or items appearing in a game scene: a haystack, a road, a river, a house, a car, and so on.
The first-person perspective region is the perspective of the virtual object — in game terms, the default camera of the game. In the first-person perspective region only the character's surroundings can be seen; that is, the virtual object cannot see its own whole body. Therefore, images that are not within the first-person perspective region of the virtual character need to be removed.
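To illustrate the filtering idea above, here is a small pure-Python sketch that crops an assumed radar-map region out of a screenshot (modeled as a 2-D list of gray values) and flags frames whose map region is dominated by bright interference. The region coordinates and the brightness threshold are invented for the example, not values from the patent.

```python
# Illustrative sketch: cut the radar-map corner out of a full screenshot
# and discard frames whose map region is obscured by bright interference.

def crop_map(frame, top, left, size):
    """Extract the square radar-map region from a screenshot (2-D list)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def is_interfered(map_img, threshold=0.5):
    """Treat the frame as interference if too many pixels are saturated."""
    total = sum(len(r) for r in map_img)
    bright = sum(1 for r in map_img for px in r if px >= 250)
    return bright / total > threshold

# a tiny 4x4 "screenshot"; the 2x2 map region starts at row 0, column 2
frame = [
    [0, 0, 10, 20],
    [0, 0, 30, 40],
    [0, 0,  0,  0],
    [0, 0,  0,  0],
]
radar = crop_map(frame, 0, 2, 2)
print(radar)                 # [[10, 20], [30, 40]]
print(is_interfered(radar))  # False
```

Frames for which `is_interfered` returns True would be the "target images" to remove before building the game map images.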
In some embodiments, to facilitate accurate determination of the actual position of the virtual object, a mask template is generated prior to testing, thereby providing a basis for acquiring the keypoints. Therefore, as shown in fig. 3b, before obtaining the set of keypoints, the method further includes:
201. and receiving a second instruction of the user.
The second instruction is used for indicating to control the virtual object to move in the effective range of the target position.
The target position refers to any position of the current virtual object in the map, for example, the initial position of the virtual object when it is placed in the map; the specific target position is preset based on the object characteristics of the virtual object or the routing task, and is not limited here. For example, if the virtual object is a hero character that first appears at a certain position and then performs the routing task, the position where the hero character first appears in the map can be set as the target position.
The effective range limits the movement of the virtual object, so that, with the target position as the center, game images of the virtual object moving within the effective range can be obtained by controlling its movement, and a game sample of the local map can then be obtained.
For example, the game character is controlled to move within a circle having a diameter of 3 meters centered on the center point of the target city area (with center coordinates (2, 3)). For example, the game character is controlled to move once around the target urban area while images are recorded during the process.
202. And controlling the virtual object to move within the effective range of the target position in response to the second instruction.
The movement behavior of the virtual object can be controlled by a gamepad, keyboard shortcut keys, keyboard direction keys, gestures, and the like. The virtual object is moved within the effective range of the target position so that, when the virtual object later moves into the effective range of the target position, any position within that range can be taken directly as the current key point; the virtual object is then controlled to move toward the key point following the current one according to the order of the key points in the key point set, and so on, thereby achieving accurate automatic path finding.
203. And acquiring at least one interaction sample in the effective range.
The interactive sample refers to game material appearing within the viewing range of the first-person perspective of the virtual object. For example, the interactive sample is the game material appearing in the viewing range of the game character's first-person perspective: the game character is controlled to move one full circle around the target position, and the game material visible within the viewing range during that circle is recorded. The images can be obtained by video recording or by taking screenshots, which is not limited in the embodiments of the application. For example, the interactive sample may be a game map image of the game character during the game.
204. And generating a first gray image of the first local map according to the first local map where the target position is located and the color information of each interactive sample.
The first local map is a map within a certain area around the target position and forms part of the stitched map. For example, if the target position is a certain mountain village, the local map of the village is the map within the viewing range of the virtual object's first-person perspective as it rotates through a full circle around the village. The local map of the mountain village is thus a local map of the game map.
In the embodiment of the present application, the grayscale image may be replaced by an N-valued image, for example, a mask image obtained by binarizing the grayscale image. Processing such as binarization or quantization to five levels may be performed on the grayscale image corresponding to the interactive sample, which is not limited here; details of similar parts are not repeated. Specifically, reference may be made to the schematic diagram shown as fig. 2 in fig. 4a.
In some embodiments, a grayscale image for the real-time viewing angle can be derived from a reference grayscale image. Specifically, generating the grayscale image corresponding to each interactive sample according to the first local map where the target position is located and the color information of each interactive sample includes:
a. and acquiring a first local map when the pitch angle is smaller than a preset angle (for example, the pitch angle is 0) according to the real-time position of the virtual object.
b. And acquiring a second gray image corresponding to the first local map when the pitch angle is smaller than a preset angle.
c. And acquiring a real-time pitch angle of the first-person visual angle.
d. And rotating the second gray level image according to the real-time pitch angle to obtain a first gray level image corresponding to the real-time pitch angle.
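Steps a–d above can be sketched as follows. For brevity this toy version rotates the reference mask only in 90-degree steps, whereas a real implementation would rotate by the exact recognized pitch angle (e.g. with an affine rotation).

```python
# Minimal sketch of steps a-d: start from the binary mask captured at the
# reference view angle (pitch 0), read the character's real-time view
# angle, and rotate the mask to match. 90-degree steps only, for brevity.

def rotate90(mask, times):
    """Rotate a 2-D mask counter-clockwise by `times` * 90 degrees."""
    for _ in range(times % 4):
        mask = [list(row) for row in zip(*mask)][::-1]
    return mask

base_mask = [
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
# suppose the recognized real-time view angle maps to one 90-degree step
print(rotate90(base_mask, 1))  # [[0, 0, 0], [1, 1, 0], [1, 0, 0]]
```

Rotating the stored reference mask, instead of re-deriving a mask per frame, is what lets a single prepared template cover every viewing angle.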
For example, in the case where the grayscale image is replaced by a mask image, as shown in fig. 4a, a mask image for a viewing angle of 0 of the game character is prepared in advance. In fig. 4a, fig. 1 shows the local game map at a viewing angle of 0, and fig. 2 shows the corresponding mask image M. The white area in the mask image M represents the game-map-related area.
After the mask image M for viewing angle 0 is obtained, the current viewing angle (which may be called the pitch angle) of the game character is identified, and the mask image M is rotated according to that angle to obtain the mask image M1 corresponding to it; the identification area for the viewing angle is shown by the solid rectangular frame in fig. 4b. By analogy, mask images are collected for the view-angle digits in the range 0 to 9. Since the local map is a radar map with directional indications, the digits in the solid rectangular frame shown in fig. 4b can be recognized through local map matching to obtain the digit corresponding to the viewing angle; the mask image is then rotated according to that angle to obtain the mask image corresponding to the current viewing angle, which provides a basis for subsequently obtaining the stitched map on the map drawing board.
In the embodiment of the present application, a local game map (for example, the first local map and the second local map) contains a circle representing the game character (for example, the circle denoted by reference numeral 1 in fig. 4a) and a white sector area related to the viewing angle (for example, the one shown as fig. 1 in fig. 4a).
In some embodiments, after the first grayscale image corresponding to the real-time pitch angle is obtained, an azimuth indication icon may further be displayed on the interactive interface, so that the user can intuitively read the current azimuth of the virtual object in real time; the azimuth indication icon corresponds to the real-time pitch angle of the first-person perspective.
In other embodiments, in order to reduce the area matched against the map drawing board based on the first local map, the first grayscale image may further be subjected to erosion processing. Specifically, after the first grayscale image corresponding to the real-time pitch angle is obtained, the method further includes:
performing erosion processing on the first grayscale image to obtain an eroded first grayscale image.
For example, erosion processing is performed on the mask image M1, that is, the foreground area of the mask image M1 (the white, game-map-related area) is shrunk to generate a new mask image M2. In some embodiments, the following erosion process may be used: a structuring element B is moved over the mask image M1, with the center point of B at the position of the element currently being processed; the points of B are compared with the pixels of the mask image M1 one by one, and the foreground pixel is retained only if all points of B fall within the foreground of the mask image M1, otherwise it is removed; finally, the mask image M2 is obtained. After erosion, the pixels in the mask image M2 still lie within the range of the original mask image M1 but are fewer than in M1, as if one layer had been etched away from the mask image M1.
As can be seen, by shrinking the foreground area of the mask image M1 through the erosion operation, the area of the image to be matched can be reduced, and the existence of a matching area in the first path-finding map (for example, a complete game map obtained through operations such as stitching and composition) can be ensured.
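The erosion step can be sketched as follows, assuming a binary mask (1 = white foreground) and a full 3×3 structuring element; this is generic morphological erosion, not necessarily the patent's exact structuring element B.

```python
# Hedged sketch of the erosion step: shrink the white (foreground) region
# of mask M1 by one layer with a 3x3 structuring element, producing M2.
# A pixel survives only if its whole 3x3 neighbourhood is foreground.

def erode(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

m1 = [
    [1, 1, 1, 1],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
]
m2 = erode(m1)
print(m2)  # only the 2x2 interior survives
```

Every surviving pixel of M2 lies strictly inside M1, which is exactly the "etched away by one layer" property described above.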
205. And matching the first local map with the first gray image in the map drawing board to obtain a target matching position of the first local map in the map drawing board.
The target matching position may be any vertex in the first local map, for example, a vertex in the upper left corner of the first local map.
The map drawing board in the embodiment of the application can adopt the completely black image, so that matching and covering of the local map corresponding to the gray level image on the map drawing board are facilitated, and the global first way-finding map can be obtained more quickly and accurately.
206. And according to the target matching position, covering a white area in the first gray level image on the map drawing board so as to update a spliced map in the map drawing board.
Optionally, in some embodiments of the application, in order to reduce an error of a target matching position of the first local map matched on the map drawing board, a target area may be further cut from a spliced map in the map drawing board according to the real-time position, and then the first local map of the current game image and the eroded mask image M2 are matched in the target area, so that a best matching position of the local map on the map drawing board may be obtained. Specifically, the embodiment of the application comprises a step (1) to a step (3):
(1) and acquiring a history matching position.
The history matching position refers to the position at which the second local map was matched on the map drawing board when the virtual object was on the second local map. The history matching position comprises a first coordinate of the virtual object in a first direction and a second coordinate of the virtual object in a second direction.
For example, the history matching position can be a position matched n frames earlier on the game map, where n is a positive integer. Because the moving speed of a game character is limited, the matching position of the same game character in the next frame differs little from that in the previous frame. Therefore, the history matching position can adopt the matching result of the frame preceding the current frame, which reduces the amount of matching computation, speeds up matching, and improves position-matching precision.
(2) And drawing a target area in the map drawing board according to the historical matching position.
Specifically, this can be achieved by the following operations (a) to (c):
(a) the first coordinate is moved in the first direction by a first distance, and the second coordinate is moved in the second direction by a second distance to form a candidate region.
As shown in fig. 5a, the X coordinate may be shifted left by 50 pixels and the Y coordinate shifted up by 50 pixels to obtain the candidate region. The 50 pixels are set according to the distance the game character can move within 1 second. The number of pixels shifted is determined by the moving speed of the game character, which ensures that the cropped local map contains the current local map.
The pixel variations of the X coordinate and the Y coordinate may be the same or different, as long as the cropped target area (i.e., the cropped local map) is guaranteed to contain the current local map; this is not limited in the embodiment of the present application. For example, the candidate area is cut out from the current stitched map on the map drawing board by taking X-50 as the abscissa and Y-50 as the ordinate of a chosen vertex of the local map (for example, the top-left vertex), with the width and height being those of the local game map plus 100.
(b) And obtaining the target width according to the first distance and the width of the second local map, and obtaining the target length according to the second distance and the length of the second local map.
The target width refers to the width of a target area to be drawn, and the target length refers to the length of the target area to be drawn.
(c) And drawing a target area by taking the candidate area as a central area.
And the target area is used for being matched with the mask image and the local map so as to obtain the current real-time position of the virtual object in the first road-finding map. That is, the target area is a partial area cut out from the first route search map (i.e., the global game map obtained by the aforementioned stitching).
When the target area is drawn, the target width and the target length obtained in operation (b) are used. That is, after the target area is drawn, its width is the target width and its length is the target length. For example, the width of the rectangular area is the width of the local game map plus 100, and its height is the height of the local game map plus 100.
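Operations (a)–(c) can be sketched as follows, with the 50-pixel margin taken from the example above (an assumed bound on per-second movement).

```python
# Sketch of operations (a)-(c): expand the previous match position by the
# assumed maximum per-frame movement (50 pixels) to obtain the target
# search area on the stitched map. Width/height grow by 2 * margin = 100.

def target_region(prev_x, prev_y, map_w, map_h, margin=50):
    """Return (x, y, width, height) of the search window."""
    x = max(prev_x - margin, 0)   # clamp at the stitched map's edge
    y = max(prev_y - margin, 0)
    return x, y, map_w + 2 * margin, map_h + 2 * margin

print(target_region(200, 120, 64, 64))  # (150, 70, 164, 164)
```

Because the window always covers the previous match plus the maximum possible movement, the current local map is guaranteed to lie inside it.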
(3) And matching the first local map and the first gray image in the target area to obtain the target matching position.
In some embodiments, after the target area is drawn, in order to match the real-time position of the virtual object in the first routing map more accurately in the later retest stage, and considering that only part of the pixel area in the game image is related to the game map, the best-matching position of the first local map on the map drawing board, that is, the target matching position, may be found within the target area at this stage. Specifically, the method comprises the following steps:
adding the first local map and the first gray-scale image (which may also be a first gray-scale image after the erosion processing) into the target region;
and matching the first local map and the first gray image in the target area to obtain a target matching position of the first local map in the map drawing board.
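The masked matching step can be sketched in pure Python as follows: slide the local map over the cropped target region, score only the pixels where the mask is white, and keep the offset with the smallest squared difference. A real implementation would typically use an optimized routine (e.g. OpenCV's `matchTemplate` with a mask) rather than this naive loop.

```python
# Hedged sketch of masked template matching inside the target region.
# Images are 2-D lists of gray values; mask is 1 where pixels count.

def match(region, template, mask):
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for y in range(len(region) - th + 1):
        for x in range(len(region[0]) - tw + 1):
            score = sum((region[y + i][x + j] - template[i][j]) ** 2
                        for i in range(th) for j in range(tw)
                        if mask[i][j])            # masked pixels only
            if best is None or score < best:
                best, best_pos = score, (x, y)
    return best_pos

region = [
    [9, 9, 9, 9],
    [9, 1, 2, 9],
    [9, 3, 4, 9],
    [9, 9, 9, 9],
]
template = [[1, 2], [3, 4]]
mask = [[1, 1], [1, 1]]
print(match(region, template, mask))  # (1, 1)
```

Restricting the score to white-mask pixels is what keeps the character's own marker and view cone from distorting the match.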
In the embodiment of the application, the initial state of the map drawing board is a blank drawing board, and the blank drawing board is used for splicing each local map in the game application, and finally the first road-finding map is obtained through splicing. The first routing map refers to a local game map corresponding to a certain area in a global game map applied during the whole interaction of the current virtual object, and the virtual object needs to perform routing according to a preset routing path in the local game map to complete a section of routing task of the virtual object on the first routing map.
For the map drawing board in the initial state, when the first local map is covered on the map drawing board, a starting position may be set in the map drawing board, and a certain vertex of the first local map is overlapped with the starting position.
(4) And according to the target matching position, covering the white area in the first gray level image on the map drawing board so as to update the spliced map in the map drawing board.
Specifically, after the target matching position is obtained, the white area in the first grayscale image is fused with the game map area at the target matching position to update the stitched map on the map drawing board. These operations are repeated, continuing with the fusion (i.e., stitching) of the next local map, so that the stitched map is continuously updated until the first path-finding map (i.e., the stitched global game map) is obtained by fusion. The white area is the area associated with the game map. For example, in the stitching diagram shown in fig. 5b, the history matching location represents the second local map and the target matching location represents the first local map.
For example, after a rectangular area is selected from a game map (i.e., a global map obtained by stitching, which is simply referred to as a stitched map), the local game map of the current image and the mask image M2 after erosion processing are matched in the rectangular area, so that the best matching position can be obtained. Then, the image corresponding to the white area in the mask image M1 is merged with the game map area with the highest matching degree, and one formula of the merging can refer to the following formula:
p1 = (p1 × c + p2) / (c + 1)
where p1 represents a pixel value of the stitched map, c represents the number of times the pixel has appeared in a white area of the mask during the stitching process, and p2 represents the pixel value of the current local game map. The fusion formula is only an example; modifications such as adding, deleting, or changing parameters may be made to it, and the embodiment of the present application is not limited thereto.
Therefore, matching the local map against the target area cropped from the stitched map improves matching precision, and because the local map does not need to be matched against every area of the stitched map, the computation required by full-map matching is avoided.
Fusing the image corresponding to the white area of the mask image M1 with the best-matching game map area effectively filters out noise in the local game map, such as the bullet in fig. 4c.
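Under the reading that the fusion formula above is a per-pixel running average, the update can be sketched as follows; `counts` tracks how often each pixel has fallen in a white mask area, so transient noise (such as a bullet visible in one frame) is averaged away.

```python
# Sketch of the fusion step, read as a per-pixel running average:
# p1 <- (p1 * c + p2) / (c + 1) for every white-mask pixel.

def fuse(stitched, counts, local, mask):
    for y, row in enumerate(mask):
        for x, white in enumerate(row):
            if white:
                c = counts[y][x]
                stitched[y][x] = (stitched[y][x] * c + local[y][x]) / (c + 1)
                counts[y][x] = c + 1
    return stitched

stitched = [[0.0, 0.0]]
counts = [[0, 0]]
fuse(stitched, counts, [[10, 99]], [[1, 0]])   # first covering frame
fuse(stitched, counts, [[20, 99]], [[1, 0]])   # second covering frame
print(stitched)  # [[15.0, 0.0]] - masked-out pixel stays untouched
```

After two frames with values 10 and 20, the fused pixel is their average, while the pixel outside the white mask is never written.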
The game images are continuously stitched according to the above process to finally obtain the stitched map, that is, the stitched game map. For example, based on the embodiments shown in figs. 4a to 4c, the game images are stitched into a map of city P; an example of the stitched global city-P map is shown in the schematic diagram of fig. 5c.
After the initial settings of recording the interactive sample, drawing the global route-finding map, setting the key point set, and the like are completed, the interactive application may be tested; for example, the virtual object may be put into a test to check whether it strictly performs route-finding according to the preset route-finding task, whether the key point settings deviate, and so on. Specifically, referring to fig. 6, a method for controlling movement of a virtual object according to an embodiment of the present application is described below, where the embodiment of the present application includes:
301. the method comprises the steps of obtaining a route searching task of a virtual object and obtaining a key point set of the virtual object in a first route searching map.
The virtual object refers to a character object played in the interactive application, and it can interact with other virtual characters in the interactive application. For example, in a game scenario, the virtual object may be a game character, and two game characters may attack or chase each other. The virtual object is controlled by a user (also referred to as a player). The virtual object may also be referred to as a virtual character, a player, a virtual user, and the like, which is not limited in this embodiment of the application.
The routing task comprises routing paths of the virtual object in at least one local map, and may involve one or at least two routing paths. For example, in a game scene, for a local map A, when game character a enters local map A, it needs to move from one point to another according to the preset routing path, finally walks out of local map A, and enters the adjacent local map B.
The first route-seeking map refers to a local map corresponding to a certain area in a global map applied by a current virtual object during whole interaction, and the virtual object needs to seek a route in the local map according to a preset route-seeking path to complete a route-seeking task of the virtual object in the first route-seeking map.
The key point set comprises a plurality of key points which are arranged in order, and the key points refer to positions of the virtual object which accord with preset movement behaviors in the movement of the target path.
302. And acquiring the real-time position of the virtual object in the first road-finding map.
Since the virtual object may be continuously moving in the first road-finding map, its real-time position changes accordingly as it moves. For example, at 11:20 the game character is at the magic castle in the game map, and at 11:22 the game character is in the underground river in the game map.
In some embodiments, the obtaining the real-time position of the virtual object in the first road-finding map comprises:
identifying the current view angle of the game character, and rotating the mask image M according to the view angle to obtain a corresponding mask image;
and performing preset mask template matching in the mosaic map based on the local game map and the mask image to find the most matched position (x, y).
Since the game character is at the center of the local game map, half the width and half the height of the local game map are added to the x and y coordinates, respectively, to derive the position of the game character in the stitched map.
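The position derivation described above amounts to one line of arithmetic; a sketch:

```python
# Sketch of the position derivation: the character sits at the centre of
# the local map, so the matched top-left corner (x, y) plus half the local
# map's width and height gives the character's stitched-map position.

def character_position(match_x, match_y, local_w, local_h):
    return match_x + local_w // 2, match_y + local_h // 2

print(character_position(300, 180, 100, 100))  # (350, 230)
```
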
303. And controlling the virtual object to move from the real-time position to a target key point in the first routing map according to the routing task and the real-time position of the virtual object in the first routing map, and to move to a next key point in the effective range of the target key point until the routing task is completed.
The target key point may be a key point in the set of key points, where a distance from the real-time position is smaller than a preset distance.
By controlling the virtual object to move from the real-time position toward a target key point in the first road-finding map according to the routing task, and then toward the next key point within the effective range of the target key point until the routing task is completed, the movement behavior of the virtual object can be controlled simply. The movement behavior refers to information such as motion, speed, and direction of the virtual object when moving in the first road-finding map; for example, it may refer to a game character starting, stopping, or turning in the game map.
In some embodiments, when the virtual object is controlled to perform the corresponding routing task and the routing path in the routing map changes, the movement behaviors of the virtual object may include starting, stopping, or turning; these behaviors are the hubs connecting two adjacent routing paths.
Typically, each virtual object moves through many locations in the game map, so a key point set usually includes at least two key points. When the key point set includes at least two key points, the flow of controlling the virtual object to move from one key point to another is described taking two key points as an example. Specifically, the key point set includes a first key point and a second key point; the first key point precedes the second key point in the order of the set, and the second key point is the key point next to the first key point, that is, the two key points are adjacent. The first key point can be regarded as the target key point; which key point is the specific target key point depends on the real-time position of the virtual object. The movement of the virtual object in the first routing map according to the routing task can be controlled as follows:
when the real-time position of the virtual object is within the effective range of the first key point, controlling the virtual object to move towards the second key point. The effective range of the first key point can be represented by the fact that the distance between the real-time position and the first key point is smaller than a preset distance. The shape, size, and the like of the effective range of any keypoint in the keypoint set (e.g., the aforementioned target keypoint, the first keypoint) are not limited in the embodiments of the present application.
For example, the real-time position of a game character is matched on the map drawing board based on the mask image and the local map; then the first key point (for example, a turning point) closest to the real-time position is determined, and the game character is controlled to move toward that turning point. When the real-time position of the game character is within a radius of 0.5 m of the turning point (i.e., the first key point), the character is controlled to run toward the stop point (i.e., the second key point), and so on: whenever the real-time position approaches one key point, the character is controlled to move toward the next one.
In this way, the entire path-finding path is traversed until the whole path-finding task is completed, so that the whole task of the game character runs automatically, providing a basis for recording and automated testing.
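Step 303 can be sketched as a simple follow-the-keypoints loop; the 0.5-unit effective radius and step size are illustrative values taken from the example above, not the patent's exact parameters.

```python
# Minimal sketch of step 303: walk the ordered key point list, advancing
# to the next key point whenever the (simulated) real-time position
# enters the current key point's effective range.

import math

def follow(keypoints, step=0.5, radius=0.5):
    x, y = keypoints[0]
    visited = [keypoints[0]]
    for tx, ty in keypoints[1:]:
        while math.hypot(tx - x, ty - y) > radius:   # outside effective range
            dist = math.hypot(tx - x, ty - y)
            x += step * (tx - x) / dist              # move toward key point
            y += step * (ty - y) / dist
        visited.append((tx, ty))                     # key point reached
    return visited

print(follow([(0.0, 0.0), (2.0, 0.0), (2.0, 2.0)]))
```

Each iteration of the outer loop corresponds to "move toward the next key point until within its effective range", so the returned list is the key point sequence in the order it was traversed.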
Compared with the prior art, the scheme provided in the embodiment of the application is mainly embodied in the following aspects:
1. The movement is controlled in the first routing map according to the routing task, using a plurality of sequentially arranged key points, where a key point is a position at which the virtual object exhibits the preset movement behavior while moving along the target path. Thus, when the virtual object is controlled according to the key point set and its real-time position in the first routing map, the routing task of the virtual object in the first routing map can be accomplished smoothly, providing a basis for subsequent application in testing interaction.
2. A game map is generated from the local game maps in the recorded game samples; the position of the game character on the game map is matched through a masked template, the game map is stitched, and a basis is provided for realizing the subsequent path-finding task.
3. The key points appearing in the path-finding path are determined by setting the preset movement behavior and combining it with the position of the virtual object, so that the key point set of the whole path-finding map can finally be obtained accurately and quickly. Therefore, in the testing stage of the interaction, the virtual object can complete the path-finding task set by the application based on the matching position of the mask template and the key point set.
4. Based on the key points in the path-finding path, the position of the virtual object can be quickly derived from the game samples recorded in short segments and the local game maps, by stitching the game map and matching the mask template; the key points are thus recorded, and the map-running function of the virtual object is finally realized.
Any technical feature mentioned in the embodiment corresponding to any one of fig. 1a to 6 is also applicable to the embodiments corresponding to fig. 7 to 9 in the embodiment of the present application, and similar parts are not repeated herein.
In the above description, a method for controlling the movement of a virtual object in the embodiment of the present application is described, and a device for executing the method for controlling the movement of a virtual object is described below.
Referring to fig. 7, a schematic structural diagram of a virtual object movement control apparatus 70 shown in fig. 7 can be applied to a game test scenario. The virtual object movement control device 70 in the embodiment of the present application can implement the steps corresponding to the method for controlling the movement of the virtual object performed in the embodiment corresponding to any one of fig. 1a to 6 described above. The virtual object movement control apparatus 70 includes a processing module 701 and an input/output module 702:
the input/output module 702 is configured to obtain a routing task of a virtual object and obtain a set of key points of the virtual object in a first routing map; the routing task comprises routing paths of the virtual objects in at least one local map; the key point set comprises a plurality of sequentially arranged key points, and the key points refer to positions of the virtual object, which accord with preset movement behaviors in the movement of a target path;
the input/output module 702 is further configured to obtain a real-time position of the virtual object in the first road-finding map;
the processing module 701 is configured to control the virtual object to move from a real-time position toward a target key point in the first way finding map according to the way finding task and according to the set of key points and the real-time position of the virtual object in the first way finding map, and move to a next key point within an effective range of the target key point until the way finding task is completed; the target key points are key points in the key point set, wherein the distance between the target key points and the real-time position is smaller than a preset distance.
In some embodiments, the key point set comprises a first key point and a second key point, the first key point preceding the second key point in the key point set;
the processing module 701 is specifically configured to:
control the virtual object to move toward the second key point when the real-time position of the virtual object is within the effective range of the first key point.
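Purely as an illustration of the key point switching described above, the logic might be sketched as follows. The function name, the 2-D coordinate form, and the Euclidean effective range are assumptions made for this sketch and are not specified by the patent.

```python
import math

def next_waypoint(position, keypoints, index, effective_range=2.0):
    """Return the index of the key point the virtual object should head for.

    position        -- (x, y) real-time position of the virtual object
    keypoints       -- ordered list of (x, y) key points (the key point set)
    index           -- index of the current target key point
    effective_range -- radius within which a key point counts as reached
    """
    tx, ty = keypoints[index]
    if math.hypot(position[0] - tx, position[1] - ty) <= effective_range:
        # Within the effective range of the current key point:
        # advance to the next key point in the ordered set
        # (clamped at the last key point, where the task completes).
        index = min(index + 1, len(keypoints) - 1)
    return index
```

On each frame the controller would call this function and steer the object toward `keypoints[index]` until the final key point is reached.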
In some embodiments, before the input/output module 702 obtains the key point set of the virtual object in the first path-finding map, the processing module 701 is further configured to:
acquire a plurality of game map images from an interactive interface through the input/output module 702;
synthesize the plurality of game map images into the first path-finding map;
receive a first instruction from a user through the input/output module 702;
control, in response to the first instruction, the virtual object to move along a target path;
when the movement behavior of the virtual object matches the preset movement behavior, match a first local map and a first grayscale image of the first local map against the first path-finding map to obtain a first position of the virtual object in the first path-finding map;
and set the first position as a preset key point and add the preset key point to the key point set.
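For illustration only, the matching of a local map against the path-finding map to locate the virtual object could be sketched as a masked template match. The brute-force search below and all names are assumptions; a practical implementation would use an optimized matcher such as OpenCV's `matchTemplate`.

```python
import numpy as np

def match_local_map(way_map, local_map, mask):
    """Slide the local map over the path-finding map and return the
    top-left (row, col) position with the smallest masked squared
    difference, i.e. the best matching position.

    way_map   -- 2-D array, stitched path-finding map (grayscale)
    local_map -- 2-D array, current local map (grayscale)
    mask      -- 2-D array, nonzero where the local map is valid
    """
    h, w = local_map.shape
    H, W = way_map.shape
    best, best_pos = None, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            window = way_map[r:r + h, c:c + w]
            # Only pixels inside the valid (mask) area contribute.
            score = np.sum(mask * (window - local_map) ** 2)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos
```

The returned position would then be recorded as a preset key point when the movement behavior matches the preset one.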
In some embodiments, before the input/output module 702 obtains the key point set of the virtual object in the first path-finding map, the processing module 701 is further configured to:
receive a first instruction from a user through the input/output module 702;
control, in response to the first instruction, the virtual object to move along a target path;
when the movement behavior of the virtual object matches the preset movement behavior, match the first local map and a first grayscale image of the first local map against the first path-finding map to obtain a first position of the virtual object in the first path-finding map;
and set the first position as a preset key point and add the preset key point to the key point set.
In some embodiments, the processing module 701 is specifically configured to:
acquire, through the input/output module 702, a plurality of game images of the virtual object during interaction in an interactive interface;
remove a target image from the plurality of game images, the target image comprising the virtual object in the game map and interference information in the first-person view area of the virtual object;
and obtain the plurality of game map images from the game images with the target image removed.
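As an illustrative sketch of removing the target image (the virtual object and the interference information in its first-person view area) from a game image before the images are used for map stitching. The rectangular region representation and the function name are assumptions for this sketch.

```python
import numpy as np

def remove_interference(frame, boxes):
    """Blank out interference regions in a captured game image.

    frame -- H x W (or H x W x C) image array
    boxes -- list of (top, left, height, width) regions covering the
             virtual object and its first-person view interference;
             the rectangle representation is an assumption
    """
    cleaned = frame.copy()
    for top, left, h, w in boxes:
        cleaned[top:top + h, left:left + w] = 0  # zero out the region
    return cleaned
```

The cleaned frames would then serve as the game map images that are synthesized into the path-finding map.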
In some embodiments, before obtaining the key point set, the processing module 701 is further configured to:
receive a second instruction from the user through the input/output module 702;
control, in response to the second instruction, the virtual object to move within an effective range of a target position;
acquire at least one interactive sample within the effective range, where an interactive sample is an interactive material appearing within the view range of the first-person view of the virtual object;
generate a first grayscale image of the first local map according to the first local map where the target position is located and the color information of each interactive sample;
match the first local map and the first grayscale image in the map drawing board to obtain a target matching position of the first local map in the map drawing board;
and cover, according to the target matching position, the white area in the first grayscale image onto the map drawing board to update the stitched map in the map drawing board.
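The step of covering the white area of the first grayscale image onto the map drawing board might, purely for illustration, look as follows. Treating white (value 255) pixels as the valid area to paste is an assumption consistent with the description above.

```python
import numpy as np

def update_stitched_map(board, gray, pos):
    """Paste the white (valid) pixels of the local grayscale image onto
    the map drawing board at the matched position, leaving previously
    stitched pixels elsewhere untouched.

    board -- 2-D array, the map drawing board (stitched map so far)
    gray  -- 2-D array, first grayscale image of the local map
    pos   -- (row, col) target matching position on the board
    """
    r, c = pos
    h, w = gray.shape
    region = board[r:r + h, c:c + w]   # view into the board
    white = gray == 255                # white area marks valid pixels
    region[white] = gray[white]        # cover only the white area
    return board
```

Because `region` is a view into `board`, the assignment updates the stitched map in place.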
In some embodiments, the processing module 701 is specifically configured to:
acquire, according to the target position, a first local map captured when the pitch angle is smaller than a preset angle;
acquire a second grayscale image corresponding to the first local map captured when the pitch angle is smaller than the preset angle;
acquire a real-time pitch angle of the first-person view;
and rotate the second grayscale image according to the real-time pitch angle to obtain a first grayscale image corresponding to the real-time pitch angle.
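A minimal nearest-neighbour sketch of rotating the second grayscale image by the real-time angle. The rotation convention, the mapping from pitch angle to rotation angle, and the function name are assumptions for illustration; the patent only states that the image is rotated according to the real-time pitch angle.

```python
import numpy as np

def rotate_for_pitch(gray, angle_deg):
    """Rotate a grayscale image about its centre by angle_deg degrees
    using nearest-neighbour inverse mapping, producing the grayscale
    image corresponding to the real-time angle."""
    h, w = gray.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    cos_a, sin_a = np.cos(a), np.sin(a)
    out = np.zeros_like(gray)
    for r in range(h):
        for c in range(w):
            # Inverse-map each output pixel back into the source image.
            y, x = r - cy, c - cx
            sy = cos_a * y - sin_a * x + cy
            sx = sin_a * y + cos_a * x + cx
            si, sj = int(round(sy)), int(round(sx))
            if 0 <= si < h and 0 <= sj < w:
                out[r, c] = gray[si, sj]
    return out
```

A production implementation would typically use an interpolating library routine instead of this explicit loop.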
In some embodiments, after obtaining the first grayscale image corresponding to the real-time pitch angle, the processing module 701 is further configured to:
perform erosion on the first grayscale image to obtain an eroded first grayscale image.
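Erosion is a standard morphological operation; it shrinks the white area and removes thin noise at its borders before matching. The square structuring element and its size below are assumptions for this sketch.

```python
import numpy as np

def erode(binary, k=3):
    """Morphological erosion with a k x k square structuring element:
    a pixel keeps its value only if it is the minimum over its k x k
    neighbourhood, so white pixels survive only where the whole
    neighbourhood is white. Pixels outside the image count as black."""
    pad = k // 2
    padded = np.pad(binary, pad, mode='constant', constant_values=0)
    h, w = binary.shape
    out = np.zeros_like(binary)
    for r in range(h):
        for c in range(w):
            out[r, c] = padded[r:r + k, c:c + k].min()
    return out
```

Library routines such as OpenCV's `erode` implement the same operation efficiently.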
In some embodiments, the virtual object movement control apparatus 70 further includes a display module (not shown in fig. 7) configured to, after the processing module 701 obtains the first grayscale image corresponding to the real-time pitch angle:
display an azimuth indication icon on the interactive interface, the azimuth indication icon corresponding to the real-time pitch angle of the first-person view.
In some embodiments, the processing module 701 is further configured to:
acquire coordinate data of a historical matching position, where the historical matching position is the matching position of a second local map in the map drawing board when the virtual object was in the second local map;
draw a target area in the map drawing board according to the historical matching position;
and match the first local map and the first grayscale image within the target area to obtain the target matching position.
In some embodiments, the coordinate data includes a first coordinate in a first direction and a second coordinate in a second direction; the processing module 701 is specifically configured to:
move the first coordinate a first distance in the first direction and the second coordinate a second distance in the second direction to form a candidate area;
obtain a target width according to the first distance and the width of the second local map, and a target length according to the second distance and the length of the second local map;
and draw a target area on the map drawing board with the candidate area as its central area, the width of the target area being the target width and the length of the target area being the target length, the target area being used for matching with the first local map to obtain the current real-time position of the virtual object in the first path-finding map.
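The arithmetic for drawing the target area from the historical matching position might, for illustration, be sketched as below. The exact way the two distances extend the candidate area (here, symmetrically on both sides) is an assumption, since the description leaves it unspecified.

```python
def target_region(hist_pos, local_size, d1, d2):
    """Build the search region around the historical matching position.

    hist_pos   -- (x, y) historical matching position on the board
    local_size -- (width, length) of the second local map
    d1, d2     -- first/second distances the object may have moved in
                  the first/second direction between two matches

    Returns (left, top, width, length) of the target area: the target
    width is derived from the first distance and the local-map width
    (here width + 2*d1), the target length likewise, and the area is
    centred on the candidate area around the historical position.
    """
    x, y = hist_pos
    w, l = local_size
    target_w = w + 2 * d1      # target width from the first distance
    target_l = l + 2 * d2      # target length from the second distance
    left = x - d1              # shift so the candidate area is central
    top = y - d2
    return left, top, target_w, target_l
```

Restricting the match to this area avoids scanning the whole map drawing board on every update.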
In some embodiments, after drawing the target area, the processing module 701 is further configured to:
add the first local map and the first grayscale image into the target area;
match the first local map and the first grayscale image on the target area to obtain the target matching position;
and fuse the white area in the first grayscale image with the game map area where the target matching position is located, so as to update the stitched map in the map drawing board.
The virtual object movement control apparatus 70 in the present embodiment has been described above from the perspective of modular functional entities; the server that executes the method of controlling the movement of a virtual object in the present embodiment is described below from the perspective of hardware processing. It should be noted that, in the embodiment shown in fig. 7 of this application, the entity device corresponding to the input/output module 702 may be an input/output unit, a transceiver, a radio frequency circuit, a communication module, an output interface, or the like; the entity device corresponding to the processing module 701 may be a processor; and the entity device corresponding to the display module may be a display screen. The virtual object movement control apparatus 70 shown in fig. 7 may have the structure shown in fig. 8; in that case, the processor, the input/output unit, and the display screen in fig. 8 implement the same or similar functions as the processing module 701, the input/output module 702, and the display module provided in the apparatus embodiment corresponding to the virtual object movement control apparatus 70, and the memory in fig. 8 stores the computer program that the processor calls when executing the above method of controlling the movement of a virtual object.
Fig. 9 is a schematic diagram of a server 900 according to an embodiment of the present disclosure. The server 900 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 922 (e.g., one or more processors), a memory 932, and one or more storage media 930 (e.g., one or more mass storage devices) storing applications 942 or data 944. The memory 932 and the storage medium 930 may be transient or persistent storage. The program stored on the storage medium 930 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processing unit 922 may be configured to communicate with the storage medium 930 to execute, on the server 900, the series of instruction operations in the storage medium 930.
The server 900 may also include one or more power supplies 926, one or more wired or wireless network interfaces 950, one or more input/output interfaces 957, and/or one or more operating systems 941, such as Windows Server, Mac OS X, Unix, Linux, or FreeBSD.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 9; the steps performed by the virtual object movement control apparatus 70 shown in fig. 7 may likewise be based on this structure. For example, the processor 922, by invoking instructions in the memory 932, performs the following:
obtaining a path-finding task of a virtual object and obtaining a key point set of the virtual object in a first path-finding map through the input/output interface 957; the path-finding task comprises path-finding paths of the virtual object in at least one local map; the key point set comprises a plurality of sequentially arranged key points, where a key point is a position at which the virtual object exhibits a preset movement behavior while moving along a target path;
obtaining a real-time position of the virtual object in the first path-finding map through the input/output interface 957;
and controlling, according to the path-finding task, the key point set, and the real-time position of the virtual object in the first path-finding map, the virtual object to move from the real-time position toward a target key point in the first path-finding map, and to move toward the next key point once within the effective range of the target key point, until the path-finding task is completed; the target key point is a key point in the key point set whose distance from the real-time position is smaller than a preset distance.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the system, the apparatus, and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the embodiments of the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one position, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer-readable storage medium.
In the above embodiments, all or part of the implementation may be realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the present application are generated in whole or in part when the computer program is loaded and executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The technical solutions provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the embodiments, and the descriptions of the embodiments are only intended to help in understanding the method and core idea of the embodiments. Meanwhile, a person skilled in the art may, following the idea of the embodiments of the present application, make changes to the specific implementation and application scope. In summary, the content of this specification should not be construed as limiting the embodiments of the present application.

Claims (11)

1. A method of controlling movement of a virtual object, the method comprising:
acquiring a path-finding task of a virtual object, and acquiring a key point set of the virtual object in a first path-finding map according to a matching operation of a first local map and a first grayscale image of the first local map on the first path-finding map; wherein the path-finding task comprises path-finding paths of the virtual object in at least one local map; the first path-finding map is obtained by stitching each local map in the game application through a map drawing board; the first grayscale image is a grayscale image corresponding to a real-time pitch angle of the virtual object at a first-person view; the key point set comprises a plurality of sequentially arranged key points, where a key point is a position at which the virtual object exhibits a preset movement behavior while moving along a target path;
acquiring a real-time position of the virtual object in the first path-finding map;
controlling, according to the path-finding task, the key point set, and the real-time position of the virtual object in the first path-finding map, the virtual object to move from the real-time position toward a target key point in the first path-finding map, and to move toward the next key point once within the effective range of the target key point, until the path-finding task is completed; the target key point is a key point in the key point set whose distance from the real-time position is smaller than a preset distance.
2. The method of claim 1, wherein acquiring the key point set of the virtual object in the first path-finding map according to the matching operation of the first local map and the first grayscale image of the first local map comprises:
receiving a first instruction from a user;
controlling, in response to the first instruction, the virtual object to move along a target path;
when the movement behavior of the virtual object matches the preset movement behavior, matching the first local map and the first grayscale image of the first local map against the first path-finding map to obtain a first position of the virtual object in the first path-finding map;
and setting the first position as a preset key point and adding the preset key point to the key point set.
3. The method of claim 2, further comprising:
acquiring a plurality of game images of the virtual object during interaction in an interactive interface;
removing a target image from the plurality of game images, the target image comprising the virtual object in a game map and interference information within the first-person view area of the virtual object;
and obtaining a plurality of game map images from the game images with the target image removed.
4. The method of claim 2 or 3, wherein, before obtaining the key point set, the method further comprises:
receiving a second instruction from the user;
controlling, in response to the second instruction, the virtual object to move within an effective range of a target position;
acquiring at least one interactive sample within the effective range, where an interactive sample is an interactive material appearing within the view range of the first-person view of the virtual object;
generating a first grayscale image of the first local map according to the first local map where the target position is located and color information of each interactive sample;
matching the first local map and the first grayscale image in the map drawing board to obtain a target matching position of the first local map in the map drawing board;
and covering, according to the target matching position, the white area in the first grayscale image onto the map drawing board to update the stitched map in the map drawing board.
5. The method of claim 4, wherein generating the first grayscale image of the first local map according to the first local map where the target position is located and the color information of each interactive sample comprises:
acquiring, according to the target position, a first local map captured when the pitch angle of the virtual object is smaller than a preset angle;
acquiring a second grayscale image corresponding to the first local map captured when the pitch angle is smaller than the preset angle;
acquiring a real-time pitch angle of the first-person view;
and rotating the second grayscale image according to the real-time pitch angle to obtain a first grayscale image corresponding to the real-time pitch angle.
6. The method of claim 5, wherein, after obtaining the first grayscale image corresponding to the real-time pitch angle, the method further comprises:
performing erosion on the first grayscale image to obtain an eroded first grayscale image.
7. The method of claim 6, wherein matching the first local map with the first grayscale image in the map drawing board to obtain the target matching position comprises:
acquiring a historical matching position, where the historical matching position is the matching position of a second local map in the map drawing board when the virtual object was in the second local map;
drawing a target area in the map drawing board according to the historical matching position;
and matching the first local map and the first grayscale image within the target area to obtain the target matching position.
8. The method of claim 7, wherein the historical matching position comprises a first coordinate in a first direction and a second coordinate in a second direction, and drawing the target area in the map drawing board according to the historical matching position comprises:
moving the first coordinate a first distance in the first direction and the second coordinate a second distance in the second direction to form a candidate area;
obtaining a target width according to the first distance and the width of the second local map, and a target length according to the second distance and the length of the second local map;
and drawing a target area on the map drawing board with the candidate area as its central area, the width of the target area being the target width and the length of the target area being the target length, the target area being used for matching with the first local map to obtain the current real-time position of the virtual object in the first path-finding map.
9. A virtual object movement control apparatus, characterized in that the virtual object movement control apparatus comprises:
an input/output module, configured to acquire a path-finding task of a virtual object, and acquire a key point set of the virtual object in a first path-finding map according to a matching operation of a first local map and a first grayscale image of the first local map on the first path-finding map; wherein the path-finding task comprises path-finding paths of the virtual object in at least one local map; the first path-finding map is obtained by stitching each local map in the game application through a map drawing board; the first grayscale image is a grayscale image corresponding to a real-time pitch angle of the virtual object at a first-person view; the key point set comprises a plurality of sequentially arranged key points, where a key point is a position at which the virtual object exhibits a preset movement behavior while moving along a target path;
the input/output module is further configured to acquire a real-time position of the virtual object in the first path-finding map;
and a processing module, configured to control, according to the path-finding task, the key point set, and the real-time position of the virtual object in the first path-finding map, the virtual object to move from the real-time position toward a target key point in the first path-finding map, and to move toward the next key point once within the effective range of the target key point, until the path-finding task is completed; the target key point is a key point in the key point set whose distance from the real-time position is smaller than a preset distance.
10. A virtual object movement control apparatus, characterized by comprising:
at least one processor, memory, and transceiver;
wherein the memory is for storing a computer program and the processor is for invoking the computer program stored in the memory to perform the method of any of claims 1-8.
11. A computer-readable storage medium, comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-8.
CN202011071517.1A 2020-10-09 2020-10-09 Method, device and storage medium for controlling movement of virtual object Active CN112221143B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011071517.1A CN112221143B (en) 2020-10-09 2020-10-09 Method, device and storage medium for controlling movement of virtual object


Publications (2)

Publication Number Publication Date
CN112221143A CN112221143A (en) 2021-01-15
CN112221143B true CN112221143B (en) 2022-07-15

Family

ID=74120074


Country Status (1)

Country Link
CN (1) CN112221143B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113082713B (en) * 2021-03-01 2022-12-09 上海硬通网络科技有限公司 Game control method and device and electronic equipment
CN113209622A (en) * 2021-05-28 2021-08-06 北京字节跳动网络技术有限公司 Action determination method and device, readable medium and electronic equipment
CN113546419B (en) * 2021-07-30 2024-04-30 网易(杭州)网络有限公司 Game map display method, game map display device, terminal and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003000940A (en) * 2001-06-20 2003-01-07 Enix Corp Video game device, recording medium and program
CN106267822A (en) * 2016-08-18 2017-01-04 网易(杭州)网络有限公司 The method of testing of game performance and device
CN106422330B (en) * 2016-10-14 2019-10-29 网易(杭州)网络有限公司 The method for searching and device of unit
CN109999498A (en) * 2019-05-16 2019-07-12 网易(杭州)网络有限公司 A kind of method for searching and device of virtual objects
CN110302537B (en) * 2019-07-10 2023-12-19 深圳市腾讯网域计算机网络有限公司 Virtual object control method, device, storage medium and computer equipment
CN110465089B (en) * 2019-07-29 2021-10-22 腾讯科技(深圳)有限公司 Map exploration method, map exploration device, map exploration medium and electronic equipment based on image recognition
CN110755848B (en) * 2019-11-06 2023-09-15 网易(杭州)网络有限公司 Path finding method in game, terminal and readable storage medium
CN111111187B (en) * 2019-11-28 2023-07-14 玩心(北京)网络科技有限公司 Online game path finding method and device based on grid



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40037808

Country of ref document: HK

GR01 Patent grant