CN113018862B - Virtual object control method and device, electronic equipment and storage medium

Info

Publication number: CN113018862B
Application number: CN202110441617.7A
Authority: CN (China)
Prior art keywords: virtual object, skill, virtual, exchange, location
Legal status: Active (granted); the status listed is an assumption and is not a legal conclusion
Other languages: Chinese (zh)
Other versions: CN113018862A
Inventors: 叶梓涛, 杨霁初
Current assignee: Tencent Technology Shenzhen Co Ltd (the listed assignees may be inaccurate)
Original assignee: Tencent Technology Shenzhen Co Ltd
Application filed by: Tencent Technology Shenzhen Co Ltd
Priority to: CN202110441617.7A
Publication of: CN113018862A (application), CN113018862B (grant)


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/52: Controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • A63F 13/577: Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game, using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • A63F 13/63: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or a game-integrated level editor, by the player, e.g. authoring using a level editor
    • A63F 13/80: Special adaptations for executing a specific game genre or game mode
    • A63F 13/822: Strategy games; Role-playing games
    • A63F 13/837: Shooting of targets
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/80: Features specially adapted for executing a specific type of game
    • A63F 2300/807: Role playing or strategy games
    • A63F 2300/8076: Shooting
    • A63F 2300/8082: Virtual reality
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application provides a control method and device for a virtual object, an electronic device, and a computer-readable storage medium. The method includes: displaying a virtual scene in a human-computer interaction interface, the virtual scene including a first virtual object located at a first position and a second virtual object located at a second position; and, in response to a trigger operation for controlling the first virtual object to release a position exchange skill, exchanging the positions of the first virtual object and the second virtual object in the virtual scene, so that after the exchange the first virtual object is located at the second position and the second virtual object is located at the first position. By the method and the device, the virtual object can be moved within the virtual scene in an efficient and resource-saving manner.

Description

Virtual object control method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of human-computer interaction technologies for computers, and in particular, to a method and apparatus for controlling a virtual object, an electronic device, and a computer-readable storage medium.
Background
The man-machine interaction technology of the virtual scene based on the graphic processing hardware can realize diversified interactions among virtual objects controlled by users or artificial intelligence according to actual application requirements, and has wide practical value. For example, in a virtual scene such as a game, a real fight process between virtual objects can be simulated.
When the virtual object needs to be controlled to move from its current location to another location in the virtual scene, the related technology generally does so by controlling the virtual object to move continuously, for example, by jumping or dashing from the current location to the other location. This results in a complex interaction process, consumes additional computing resources of the computer device, and also degrades the user experience.
Disclosure of Invention
The embodiments of the application provide a control method and apparatus for a virtual object, an electronic device and a computer-readable storage medium, which enable the virtual object to move within a virtual scene in an efficient and resource-saving manner.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a control method of a virtual object, which comprises the following steps:
displaying a virtual scene in a human-computer interaction interface, wherein the virtual scene comprises a first virtual object positioned at a first position and a second virtual object positioned at a second position;
exchanging the positions of the first virtual object and the second virtual object in the virtual scene in response to a trigger operation for controlling the first virtual object to release a position exchange skill;
Wherein the first virtual object is located at the second position, and the second virtual object is located at the first position.
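For illustration only (this sketch is not part of the patent disclosure), the two steps above can be expressed in a few lines of Python; all names such as VirtualObject, VirtualScene and on_swap_skill_triggered are hypothetical.

```python
# Minimal sketch of the claimed steps: display the scene, then swap positions on trigger.
# All class and method names here are illustrative assumptions, not from the patent.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    position: tuple  # (x, y) coordinates in the virtual scene

class VirtualScene:
    def __init__(self, first: VirtualObject, second: VirtualObject):
        self.first, self.second = first, second

    def display(self):
        # Step 1: present both objects at their current positions.
        print(f"{self.first.name} at {self.first.position}, "
              f"{self.second.name} at {self.second.position}")

    def on_swap_skill_triggered(self):
        # Step 2: exchange the positions of the two objects.
        self.first.position, self.second.position = (
            self.second.position, self.first.position)

scene = VirtualScene(VirtualObject("frog", (0, 0)), VirtualObject("stone", (10, 0)))
scene.display()
scene.on_swap_skill_triggered()
scene.display()  # the frog is now at (10, 0) and the stone at (0, 0)
```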
The embodiment of the application provides a control device for a virtual object, which comprises:
the display module is used for displaying a virtual scene in the human-computer interaction interface, wherein the virtual scene comprises a first virtual object positioned at a first position and a second virtual object positioned at a second position;
the switching module is used for exchanging the positions of the first virtual object and the second virtual object in the virtual scene in response to a trigger operation for controlling the first virtual object to release the position exchange skill;
wherein the first virtual object is located at the second position, and the second virtual object is located at the first position.
In the above solution, the device further includes an obtaining module, configured to obtain a skill release condition corresponding to the position exchange skill of the first virtual object; the apparatus further includes a detection module, configured to detect, based on the skill release condition, whether the position exchange skill of the first virtual object can currently be released.
In the above scheme, the detection module is further configured to obtain an action range of the position exchange skill of the first virtual object; determine, when the second virtual object is within the action range, that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to; and when the second virtual object is outside the action range, display first prompt information, wherein the first prompt information is used for prompting that the position exchange skill cannot be released and prompting how to move into the action range.
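As a non-authoritative sketch of how such a range check could be performed (the function name, units and prompt wording are assumptions, not from the patent):

```python
import math

def check_action_range(first_pos, second_pos, action_range):
    """Return (can_release, prompt); positions are (x, y) points, distances Euclidean."""
    distance = math.dist(first_pos, second_pos)
    if distance <= action_range:
        return True, None
    # First prompt information: the skill cannot be released; say how far to move.
    return False, (f"Target is out of range; move at least "
                   f"{distance - action_range:.1f} units closer before releasing the skill.")
```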
In the above scheme, the obtaining module is further configured to obtain a state parameter of the first virtual object; the device also comprises a determining module, configured to determine the action range of the position exchange skill of the first virtual object based on the state parameter; wherein the state parameter includes at least one of: a level of the first virtual object, an activity level of the first virtual object, and a life value of the first virtual object.
In the above solution, the detection module is further configured to obtain a skill waiting time of a position exchange skill of the first virtual object; when the interval between the first time and the second time is smaller than the skill waiting time, displaying second prompting information, wherein the second prompting information is used for prompting that the position exchange skill cannot be released and prompting the waiting time; determining a trigger operation to be responsive to a position exchange skill controlling release of the first virtual object when the interval between the first time and the second time is greater than or equal to the skill waiting time; wherein the first time is a time when the first virtual object was last controlled to release the location exchange skill, and the second time is a time when the trigger operation is received.
In the above scheme, the detection module is further configured to obtain an energy value of the first virtual object; determine, when the energy value of the first virtual object is greater than the energy value required to release the position exchange skill, that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to; and when the first virtual object does not currently have a sufficient energy value, display third prompt information, wherein the third prompt information is used for prompting that the position exchange skill cannot be released and prompting that energy needs to be accumulated.
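Similarly, the energy check could look like the following sketch; the strict "greater than" comparison mirrors the wording above, and the prompt text is an assumption:

```python
def check_energy(current_energy, required_energy):
    # The wording above uses a strict comparison: energy greater than the required value.
    if current_energy > required_energy:
        return True, None
    # Third prompt information: cannot release; energy still needs to be accumulated.
    return False, (f"Not enough energy; accumulate more energy "
                   f"(currently {current_energy} of the required {required_energy}).")
```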
In the above aspect, the detection module is further configured to detect an obstacle based on a ray between the first position and the second position; when an obstacle is detected between the first virtual object and the second virtual object, display fourth prompt information, wherein the fourth prompt information is used for prompting that the position exchange skill cannot be released due to the obstacle; and when no obstacle is detected between the first virtual object and the second virtual object, determine that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to.
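In a game engine this ray test would normally be a physics raycast; the sketch below approximates it by sampling points along the segment between the two positions against axis-aligned obstacle boxes (the sampling approach and data layout are assumptions):

```python
def obstacle_on_ray(first_pos, second_pos, obstacles, samples=32):
    """obstacles: iterable of axis-aligned boxes ((min_x, min_y), (max_x, max_y))."""
    (x1, y1), (x2, y2) = first_pos, second_pos
    for i in range(samples + 1):
        t = i / samples
        px, py = x1 + t * (x2 - x1), y1 + t * (y2 - y1)  # point along the ray
        for (min_x, min_y), (max_x, max_y) in obstacles:
            if min_x <= px <= max_x and min_y <= py <= max_y:
                return True   # an obstacle blocks the exchange; show the fourth prompt
    return False              # no obstacle; the trigger operation can be responded to
```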
In the above solution, the detection module is further configured to obtain a physical rule that needs to be met when exchanging the positions of the first virtual object and the second virtual object; wherein the physical rule includes at least one of: there is enough space at each of the two positions to accommodate the virtual object that will move into it; the path used for exchanging positions is wide enough for the first virtual object and the second virtual object to pass each other in parallel; determine, when the physical rule is met, that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to; and when the physical rule is not met, display fifth prompt information, wherein the fifth prompt information is used for prompting that the position exchange skill cannot be released because the physical rule is not met and prompting a move to a position that meets the physical rule.
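A rough sketch of the two physical rules, treating each object as a circle of a given radius; the thresholds and the way "enough space" is measured are assumptions:

```python
def physical_rules_met(first_radius, second_radius,
                       free_space_at_first, free_space_at_second, path_width):
    # Rule 1: each position must have room for the object that will move into it.
    fits_at_first = free_space_at_first >= second_radius    # second object arrives at the first position
    fits_at_second = free_space_at_second >= first_radius   # first object arrives at the second position
    # Rule 2: the exchange path must let both objects pass each other side by side.
    can_pass_in_parallel = path_width >= 2 * (first_radius + second_radius)
    return fits_at_first and fits_at_second and can_pass_in_parallel
```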
In the above scheme, the display module is further configured to display a locking identifier corresponding to the second virtual object; wherein the lock identifier is used to characterize that the first virtual object is capable of exchanging position with the second virtual object.
In the above solution, the exchange module is further configured to move the first virtual object from the first position to the second position in the virtual scene at a preset speed; and either directly update the position of the second virtual object in the virtual scene from the second position to the first position, or move the second virtual object from the second position to the first position at the preset speed.
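The two movement options above (interpolated travel for the first object, instant or interpolated update for the second) could be sketched as follows; the generator, frame rate and speed values are assumptions:

```python
def travel_positions(start, target, preset_speed, dt):
    """Yield intermediate (x, y) positions from start to target, one per frame."""
    x, y = start
    while True:
        dx, dy = target[0] - x, target[1] - y
        dist = (dx * dx + dy * dy) ** 0.5
        step = preset_speed * dt
        if dist <= step:
            yield target
            return
        x, y = x + dx / dist * step, y + dy / dist * step
        yield (x, y)

# Usage: the second object is teleported, while the first walks the path frame by frame.
first_start, second_start = (0.0, 0.0), (10.0, 0.0)
second_position = first_start  # direct update of the second object to the first position
for first_position in travel_positions(first_start, second_start, preset_speed=5.0, dt=1 / 60):
    pass  # in an engine, update the first object's transform here once per rendered frame
```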
In the above scheme, the switching module is further configured to control the first virtual object to travel at the preset speed along an exchange path from the first position to the second position, automatically avoiding obstacles and mechanism (trap) props existing on the exchange path.
In the above solution, the device further includes a hiding module, configured to hide the energy value of the first virtual object during travel, so that trigger operations attempting to control the first virtual object to repeatedly release the position exchange skill while it is travelling are masked.
In the above scheme, the device further comprises a closing module, configured to close the collision box corresponding to the first virtual object and set the first virtual object's response to external operations to a locked state; the device also comprises an opening module, configured to reopen the collision box corresponding to the first virtual object and release the locked state set for the first virtual object.
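A sketch of this lock/unlock bracket around the exchange, using a Python context manager; the flag names and the dict-based object are assumptions:

```python
from contextlib import contextmanager

@contextmanager
def exchange_lock(obj):
    obj["collision_box_enabled"] = False     # close the collision box
    obj["input_locked"] = True               # responses to external operations are locked
    try:
        yield obj
    finally:
        obj["collision_box_enabled"] = True  # reopen the collision box
        obj["input_locked"] = False          # release the locked state

first_object = {"collision_box_enabled": True, "input_locked": False}
with exchange_lock(first_object):
    pass  # perform the position exchange while the object is locked
```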
In the above scheme, the display module is further configured to display the first virtual object located at the first position in the virtual scene; and, in response to a trigger operation for controlling the first virtual object to release a summoning skill, display the summoned second virtual object at the second position of the virtual scene.
In the above scheme, the display module is further configured to display a plurality of third virtual objects in the virtual scene in a preset manner; the determining module is further configured to respond to a virtual object selection operation, and take a selected third virtual object of the plurality of third virtual objects as the second virtual object.
In the above solution, the display module is further configured to display, in response to controlling a crossing operation of the first virtual object, a process of the first virtual object crossing an obstacle by means of the second virtual object; wherein the height of the obstacle exceeds the jump height that the first virtual object can achieve without the aid of the second virtual object.
In the above scheme, the obtaining module is further configured to obtain feature data of the first virtual object; the apparatus further includes a calling module configured to call the machine learning model based on the feature data, the first location, and the second location, to obtain probabilities of a corresponding plurality of candidate skills, the plurality of candidate skills including the location exchange skill; the display module is further used for displaying sixth prompt information when the maximum probability corresponds to the position exchange skill, wherein the sixth prompt information is used for prompting the release of the position exchange skill; wherein the characteristic data includes at least one of: scope of action, skill waiting time, energy value.
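As a sketch only, the skill-recommendation step could be wired up as below; the feature layout, the scikit-learn-style predict_proba call and the candidate-skill labels are all assumptions, since the patent does not disclose a concrete model:

```python
def recommend_skill(model, action_range, skill_waiting_time, energy_value,
                    first_pos, second_pos, candidate_skills):
    """candidate_skills: list of labels aligned with the model's output classes."""
    features = [action_range, skill_waiting_time, energy_value, *first_pos, *second_pos]
    probabilities = model.predict_proba([features])[0]  # e.g. a scikit-learn classifier
    best = max(range(len(candidate_skills)), key=lambda i: probabilities[i])
    if candidate_skills[best] == "position_exchange":
        # Sixth prompt information: recommend releasing the position exchange skill.
        return "Releasing the position exchange skill is recommended."
    return None
```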
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the control method of the virtual object when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium, which stores executable instructions for causing a processor to execute, so as to implement the control method of the virtual object provided by the embodiment of the application.
The embodiment of the application provides a computer program product, which comprises computer executable instructions for realizing the control method of the virtual object provided by the embodiment of the application when being executed by a processor.
The embodiment of the application has the following beneficial effects:
by controlling the first virtual object to release the position exchange skill, the position exchange of the first virtual object and the second virtual object in the virtual scene is realized, so that the first virtual object can be directly moved from the first position in the virtual scene to the second position in the virtual scene, the interaction process is simplified, the consumption of computing resources is further reduced, and meanwhile, the use experience of a user is also improved.
Drawings
Fig. 1 is an application mode schematic diagram of a control method of a virtual object provided in an embodiment of the present application;
fig. 2 is an application mode schematic diagram of a control method of a virtual object provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a terminal device 400 provided in an embodiment of the present application;
fig. 4 is a flow chart of a control method of a virtual object according to an embodiment of the present application;
fig. 5A is an application scenario schematic diagram of a control method of a virtual object provided in an embodiment of the present application;
Fig. 5B is an application scenario schematic diagram of a control method of a virtual object provided in an embodiment of the present application;
fig. 5C is an application scenario schematic diagram of a control method of a virtual object provided in an embodiment of the present application;
fig. 5D is an application scenario schematic diagram of a control method of a virtual object provided in an embodiment of the present application;
fig. 5E is an application scenario schematic diagram of a control method of a virtual object provided in an embodiment of the present application;
fig. 5F is an application scenario schematic diagram of a control method of a virtual object provided in an embodiment of the present application;
fig. 6 is a flowchart of a method for controlling a virtual object according to an embodiment of the present application;
fig. 7A is an application scenario schematic diagram of a control method of a virtual object provided in an embodiment of the present application;
fig. 7B is an application scenario schematic diagram of a control method of a virtual object provided in an embodiment of the present application;
fig. 8 is a flowchart of a method for controlling a virtual object according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", and the like are merely used to distinguish similar objects and do not represent a particular ordering of the objects, it being understood that the "first", "second", and the like may be interchanged with one another, if permitted, to enable embodiments of the application described herein to be practiced otherwise than as illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Before further describing embodiments of the present application in detail, the terms and expressions that are referred to in the embodiments of the present application are described, and are suitable for the following explanation.
1) Client: an application program running in the terminal device and used for providing various services, such as a video playing client, a game client and the like.
2) In response to: used to indicate the condition or state on which a performed operation depends; when the condition or state on which it depends is satisfied, the operation (or operations) may be performed in real time or with a set delay. Unless otherwise specified, there is no limitation on the order in which multiple such operations are performed.
3) The virtual scene is a virtual scene that an application program displays (or provides) when running on the terminal device. The virtual scene may be a simulation environment for the real world, a semi-simulation and semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, a virtual scene may include sky, land, sea, etc., the land may include environmental elements of a desert, city, etc., and a user may control a virtual object to move in the virtual scene.
4) Virtual objects, images of various people and objects in a virtual scene that can interact, or movable objects in a virtual scene. The movable object may be a virtual character, a virtual animal, a cartoon character, etc., for example: characters, animals, plants, oil drums, walls, stones, etc. displayed in the virtual scene. The virtual object may be an avatar in the virtual scene for representing a user. A virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene, occupying a portion of space in the virtual scene.
For example, the virtual object may be a user character controlled through operations on the client, an artificial intelligence (AI) configured in the virtual-scene battle through training, or a non-player character (NPC) configured in the virtual-scene interaction. For example, the virtual object may be a virtual character that performs antagonistic interaction in the virtual scene. For another example, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients joining the interaction.
5) Scene data, representing the various characteristics with which virtual objects in a virtual scene are represented during interaction; it may include, for example, the locations of the virtual objects in the virtual scene. Of course, different types of features may be included depending on the type of virtual scene; for example, in the virtual scene of a game, the scene data may include the time that various functions configured in the virtual scene need to wait (which depends on the number of times the same function can be used within a specific time), and attribute values representing various states of a game character, including, for example, a life value (also referred to as the red amount) and a magic value (also referred to as the blue amount), and the like.
When the virtual object needs to be controlled to move from its current location to another location in the virtual scene, the related technology generally does so by controlling the virtual object to move continuously, for example, by jumping or dashing from the current location to the other location. This results in a complex interaction process, consumes additional computing resources of the computer device, and also degrades the use experience of the user.
In view of the above technical problems, embodiments of the present application provide a method and apparatus for controlling a virtual object, an electronic device, and a computer-readable storage medium, which enable the virtual object to move within a virtual scene in an efficient and resource-saving manner. To make the control method of the virtual object provided by the embodiments of the present application easier to understand, an exemplary implementation scenario is described first; the virtual scene in the control method of the virtual object provided by the embodiments of the present application may be output entirely by a terminal device, or output cooperatively by a terminal device and a server.
In other embodiments, the virtual scene may also be an environment for interaction of game characters, for example, the game characters may fight in the virtual scene, and both parties may interact in the virtual scene by controlling actions of the game characters, so that a user can relax life pressure in the game process.
In an implementation scenario, referring to fig. 1, fig. 1 is a schematic application mode diagram of a control method of a virtual object provided in an embodiment of the present application, which is suitable for some application modes that can complete relevant data computation of a virtual scenario 100 completely depending on the computing capability of graphics processing hardware of a terminal device 400, for example, a game in a stand-alone/offline mode, and output of the virtual scenario is completed through the terminal device 400 such as a smart phone, a tablet computer, and a virtual reality/augmented reality device.
By way of example, the types of graphics processing hardware include the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU).
When forming the visual perception of the virtual scene 100, the terminal device 400 calculates the data required for display through the graphic computing hardware, and completes loading, analysis and rendering of the display data, and outputs a video frame capable of forming the visual perception for the virtual scene at the graphic output hardware, for example, a two-dimensional video frame is presented on the display screen of the smart phone, or a video frame realizing the three-dimensional display effect is projected on the lens of the augmented reality/virtual reality glasses; in addition, to enrich the perceived effect, the terminal device 400 may also form one or more of auditory perception, tactile perception, motion perception and gustatory perception by means of different hardware.
As an example, the terminal device 400 has a client 410 (e.g., a stand-alone game application) running thereon, and outputs a virtual scene including role playing during the running of the client 410, where the virtual scene may be an environment for interaction of a game character, such as a plain, a street, a valley, etc. for the game character to fight against; the virtual scene includes a first virtual object 110 located at a first location and a second virtual object 120 located at a second location. Wherein the first virtual object 110 may be a game character under the control of a user (or player), i.e. the first virtual object 110 is controlled by a real user, will move in the virtual scene 100 in response to the real user's operation of a controller (including a touch screen, voice operated switches, keyboard, mouse, joystick, etc.), for example, when the real user moves the joystick to the left, the first virtual object 110 will move to the left in the virtual scene 100, and may also remain stationary, jump in place, and use various functions (e.g. skills and props).
For example, when there is an obstacle 130 (e.g., a pit) between the first virtual object 110 (e.g., a frog) at the first location and the second virtual object 120 (e.g., a stone) at the second location, the client 410 controls the first virtual object 110 to release the location exchange skill in response to the release operation of the location exchange skill triggered by the user for the first virtual object 110, so as to exchange the locations of the first virtual object 110 and the second virtual object 120 in the virtual scene 100. After the location exchange is completed, the first virtual object 110 appears at the location occupied by the second virtual object 120 before the exchange, and the second virtual object 120 appears at the location occupied by the first virtual object 110 before the exchange, so that the first virtual object 110 can cross the obstacle 130 in one step through the location exchange skill, which simplifies the user's operation process, improves the user experience, and also saves the computing resources of the terminal device 400.
In another implementation scenario, referring to fig. 2, fig. 2 is a schematic application mode diagram of a control method of a virtual object provided in an embodiment of the present application, applied to a terminal device 400 and a server 200, and adapted to complete virtual scene calculation depending on a computing capability of the server 200, and output an application mode of a virtual scene at the terminal device 400.
Taking the example of forming the visual perception of the virtual scene 100, the server 200 performs calculation of virtual scene related display data (such as scene data) and sends the calculated display data to the terminal device 400 through the network 300, the terminal device 400 finishes loading, analyzing and rendering the calculated display data depending on the graphic calculation hardware, and outputs the virtual scene depending on the graphic output hardware to form the visual perception, for example, a two-dimensional video frame can be presented on a display screen of a smart phone, or a video frame for realizing a three-dimensional display effect can be projected on a lens of an augmented reality/virtual reality glasses; as regards the perception of the form of the virtual scene, it is understood that the auditory perception may be formed by means of the corresponding hardware output of the terminal device 400, for example using a microphone, the tactile perception may be formed using a vibrator, etc.
As an example, the terminal device 400 has a client 410 (e.g., a web-version game application) running thereon, and performs game interaction with other users through the connection server 200 (e.g., a game server), the terminal device 400 outputs a virtual scene 100 of the client 410, including a first virtual object 110 located at a first location and a second virtual object 120 located at a second location in the virtual scene 100. Wherein the first virtual object 110 may be a game character under control of a user, i.e. the first virtual object 110 is controlled by a real user, will move in the virtual scene 100 in response to operation of the real user with respect to a controller (e.g. touch screen, voice controlled switch, keyboard, mouse, joystick, etc.), for example when the real user moves the joystick to the right, the first virtual object 110 will move to the right in the virtual scene 100, and may also remain stationary, jump in place and use various functions (e.g. skills and props).
For example, when there is an obstacle 130 (e.g., a pit) between the first virtual object 110 (e.g., a frog) at the first location and the second virtual object 120 (e.g., a stone) at the second location, the client 410 controls the first virtual object 110 to release the location exchange skill in response to the release operation of the location exchange skill triggered by the user for the first virtual object 110, so as to exchange the locations of the first virtual object 110 and the second virtual object 120 in the virtual scene 100. After the location exchange is completed, the first virtual object 110 appears at the location occupied by the second virtual object 120 before the exchange, and the second virtual object 120 appears at the location occupied by the first virtual object 110 before the exchange, so that the first virtual object 110 can cross the obstacle 130 in one step through the location exchange skill, which simplifies the user's operation process, improves the user experience, and also saves the computing resources of the server 200.
In some embodiments, the terminal device 400 may implement the control method of the virtual object provided in the embodiments of the present application by running a computer program, for example, the computer program may be a native program or a software module in an operating system; may be a local (Native) application (APP, APPlication), i.e., a program that needs to be installed in an operating system to run, such as a game APP (i.e., client 410 described above); the method can also be an applet, namely a program which can be run only by being downloaded into a browser environment; but also a game applet that can be embedded in any APP. In general, the computer programs described above may be any form of application, module or plug-in.
Taking the computer program being an application program as an example, in actual implementation the terminal device 400 installs and runs an application program supporting the virtual scene. The application may be any one of a Massively Multiplayer Online Role-Playing Game (MMORPG), a First-Person Shooting game (FPS), a third-person shooting game, a Multiplayer Online Battle Arena game (MOBA), a virtual reality application, a three-dimensional map program, or a multiplayer warfare-class survival game. The user uses the terminal device 400 to operate a virtual object located in the virtual scene to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the virtual object may be a virtual character, such as a simulated character or a cartoon character.
In other embodiments, the embodiments of the present application may also be implemented by means of Cloud Technology (Cloud Technology), which refers to a hosting Technology that unifies serial resources such as hardware, software, networks, etc. in a wide area network or a local area network, so as to implement calculation, storage, processing, and sharing of data.
The cloud technology is a generic term for network technologies, information technologies, integration technologies, management platform technologies, application technologies and the like applied on the basis of the cloud computing business model; it can form a resource pool that is used on demand and is flexible and convenient. Cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources.
For example, the server 200 in fig. 2 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The terminal device 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal device 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present application.
The structure of the terminal apparatus 400 in fig. 1 is explained below. Referring to fig. 3, fig. 3 is a schematic structural diagram of a terminal device 400 provided in an embodiment of the present application, and the terminal device 400 shown in fig. 3 includes: at least one processor 420, a memory 460, at least one network interface 430, and a user interface 440. The various components in terminal device 400 are coupled together by bus system 450. It is understood that bus system 450 is used to implement the connected communications between these components. The bus system 450 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various buses are labeled as bus system 450 in fig. 3.
The processor 420 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor (for example, a microprocessor or any conventional processor), a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
The user interface 440 includes one or more output devices 441 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 440 also includes one or more input devices 442, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 460 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 460 optionally includes one or more storage devices physically remote from processor 420.
Memory 460 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a random access Memory (RAM, random Access Memory). The memory 460 described in embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 460 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 461 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
network communication module 462 for reaching other computing devices via one or more (wired or wireless) network interfaces 430, the exemplary network interfaces 430 comprising: bluetooth, wireless compatibility authentication (WiFi), and universal serial bus (USB, universal Serial Bus), etc.;
a presentation module 463 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 441 (e.g., a display screen, speakers, etc.) associated with the user interface 440;
an input processing module 464 for detecting one or more user inputs or interactions from one of the one or more input devices 442 and translating the detected inputs or interactions.
In some embodiments, the control device for a virtual object provided in the embodiments of the present application may be implemented in software. Fig. 3 shows the control device 465 for a virtual object stored in the memory 460, which may be software in the form of a program, a plug-in or the like, and includes the following software modules: a display module 4651, an exchange module 4652, an acquisition module 4653, a detection module 4654, a determination module 4655, a concealment module 4656, a closing module 4657, an opening module 4658 and an invoking module 4659. These modules are logical, and thus may be combined arbitrarily or further split according to the functions implemented. It should be noted that all the above modules are shown at once in fig. 3 for convenience of expression, but this should not be regarded as excluding an implementation in which the control device 465 of the virtual object includes only the display module 4651 and the exchange module 4652; the functions of each module will be described below.
In other embodiments, the control device for a virtual object provided in the embodiments of the present application may be implemented in hardware, and as an example, the control device for a virtual object provided in the embodiments of the present application may be a processor in the form of a hardware decoding processor that is programmed to perform the control method for a virtual object provided in the embodiments of the present application, for example, the processor in the form of a hardware decoding processor may employ one or more application specific integrated circuits (ASIC, application Specific Integrated Circuit), DSP, programmable logic device (PLD, programmable Logic Device), complex programmable logic device (CPLD, complex Programmable Logic Device), field programmable gate array (FPGA, field-Programmable Gate Array), or other electronic components.
The method for controlling the virtual object provided in the embodiment of the present application will be specifically described below with reference to the accompanying drawings. The control method of the virtual object provided in the embodiment of the present application may be executed by the terminal device 400 in fig. 1 alone, or may be executed by the terminal device 400 and the server 200 in fig. 2 in cooperation.
Next, a control method of the virtual object provided in the embodiment of the present application is described by taking a terminal device 400 in fig. 1 as an example. Referring to fig. 4, fig. 4 is a flowchart of a control method of a virtual object according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 4.
It should be noted that the method shown in fig. 4 may be executed by various computer programs running on the terminal device 400, and is not limited to the above-described client 410, but may also be the operating system 461, software modules and scripts described above, and therefore the client should not be considered as limiting the embodiments of the present application.
In step S101, a virtual scene is displayed in the human-computer interaction interface, wherein the virtual scene includes a first virtual object located at a first position and a second virtual object located at a second position.
In some embodiments, a virtual scene is displayed on a man-machine interaction interface of the terminal device, and a first virtual object (for example, a game character controlled by a real user) located at a first position and a second virtual object (for example, a game character controlled by other players, a game character controlled by a robot program, or stones, animals, plants, etc. displayed in the game scene) located at a second position are displayed on a screen of the virtual scene.
In other embodiments, the virtual scene may be displayed in the human-machine interaction interface from a first-person perspective (for example, the player plays the first virtual object in the game from the player's own viewpoint); the virtual scene may also be displayed from a third-person perspective (for example, the view follows the first virtual object in the game as the player plays); the virtual scene may also be displayed from a bird's-eye view with a large viewing angle; the different viewing angles can be switched arbitrarily.
As an example, the first virtual object may be the object controlled by the user in the game, although the virtual scene may also include other virtual objects controlled, for example, by other users or by a robot program. The first virtual object may be assigned to any one of a plurality of teams; the teams may be in a hostile or cooperative relationship, and the teams in the virtual scene may include one or both of the above relationships.
Taking the example of displaying the virtual scene from the first-person perspective, the virtual scene displayed in the human-computer interaction interface may include: determining the field-of-view area of the first virtual object according to the viewing position and the field angle of the first virtual object in the complete virtual scene, and presenting the part of the complete virtual scene that lies in the field-of-view area; that is, the displayed virtual scene may be a partial virtual scene relative to the panoramic virtual scene. Because the first-person perspective is the viewing perspective that gives the user the strongest sense of impact, an immersive perception for the user during operation can be achieved.
Taking an example of displaying a virtual scene with a bird's eye view and a large viewing angle, the virtual scene displayed in the human-computer interaction interface may include: in response to a zoom operation for the panoramic virtual scene, a portion of the virtual scene corresponding to the zoom operation is presented in the human-machine interaction interface, i.e., the displayed virtual scene may be a portion of the virtual scene relative to the panoramic virtual scene. Therefore, the operability of the user in the operation process can be improved, and the efficiency of man-machine interaction can be improved.
In some embodiments, the second virtual object may also be a virtual object that is summoned in the virtual scene (e.g., a virtual pet, or other virtual item, such as a stone, etc., that appears in a particular location in the virtual scene that controls the first virtual object to release the summoning skill) by the terminal device in response to a trigger operation that controls the summoning skill released by the first virtual object (i.e., the second virtual object was not originally present in the virtual scene).
When the first virtual object controlled by the real user has a summoning skill, the terminal device, in response to a trigger operation for controlling the first virtual object to release the summoning skill, displays the summoned virtual object at a specific position of the virtual scene, and then uses the summoned virtual object at that specific position as the second virtual object (namely, the virtual object whose position will subsequently be exchanged with that of the first virtual object). For example, the user may trigger a corresponding summoning control to release the summoning skill; the summoning control may be a key, an icon or the like, and the trigger mode for the summoning control may be at least one of clicking, double-clicking, long-pressing and sliding, for example generating a summoning instruction by pressing the key "Q" on a keyboard, or by clicking a summoning icon on the screen with a mouse. In addition, the summoning instruction may also be generated by recognizing a voice instruction or a limb action of the user, for example the user may generate the summoning instruction by saying "summon the pet". In this way, exchanging positions with a summoned virtual object further increases the interest of the game and greatly improves the game experience of the user.
It should be noted that, in practical application, the summoned virtual object may appear at a random position in the virtual scene, for example anywhere within a range of 1000 codes of the first virtual object (a code is a distance unit in the game scene; for example, moving a game character 3 steps may correspond to 10 codes). Of course, the summoned virtual object may also appear at a fixed position in the virtual scene, for example always 500 codes directly in front of the first virtual object (i.e., along the orientation of the first virtual object); this is not limited in the embodiments of the present application.
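For illustration, the two spawn strategies mentioned above (random within 1000 codes, or fixed at 500 codes in front) could be sketched like this; the angle convention and the uniform sampling are assumptions:

```python
import math
import random

def summon_position(first_pos, facing_radians, fixed=False):
    if fixed:
        # Fixed position: 500 codes directly in front of the first virtual object.
        return (first_pos[0] + 500 * math.cos(facing_radians),
                first_pos[1] + 500 * math.sin(facing_radians))
    # Random position anywhere within 1000 codes of the first virtual object.
    angle = random.uniform(0, 2 * math.pi)
    radius = random.uniform(0, 1000)
    return (first_pos[0] + radius * math.cos(angle),
            first_pos[1] + radius * math.sin(angle))
```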
In other embodiments, there may be a plurality of virtual objects in the virtual scene for the user to select, that is, the user may select any one virtual object from the plurality of virtual objects for location exchange, and before responding to the triggering operation of the location exchange skill controlling the release of the first virtual object, the terminal device may further perform the following operations: displaying a plurality of third virtual objects in a preset mode in the virtual scene; in response to the virtual object selection operation, a selected third virtual object of the plurality of third virtual objects is taken as a second virtual object (i.e., a virtual object that is to be position-exchanged with the first virtual object).
Taking the first virtual object as a virtual object a as an example, in addition to the virtual object a, a virtual object B, a virtual object C and a virtual object D exist in the virtual scene, and when the user selects the virtual object C, the virtual object C is taken as a second virtual object, that is, positions of the virtual object a and the virtual object C in the virtual scene are exchanged subsequently. Of course, the second virtual object may also be automatically selected, for example, a virtual object closest to the virtual object a in the virtual scene may be automatically selected as the second virtual object.
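The automatic nearest-object selection mentioned above reduces to a distance comparison; a minimal sketch follows (names and data layout assumed):

```python
import math

def pick_nearest(first_pos, candidates):
    """candidates: mapping of object name -> (x, y) position; returns the nearest name."""
    return min(candidates, key=lambda name: math.dist(first_pos, candidates[name]))

# Virtual object A at the origin; C is the closest of B, C and D and is selected.
second = pick_nearest((0, 0), {"B": (30, 40), "C": (3, 4), "D": (6, 8)})  # -> "C"
```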
In step S102, the positions of the first virtual object and the second virtual object in the virtual scene are exchanged in response to a trigger operation for controlling the first virtual object to release the position exchange skill.
In some embodiments, before responding to the trigger operation for controlling the first virtual object to release the position exchange skill, the terminal device may further perform the following operations: acquiring a skill release condition corresponding to the position exchange skill of the first virtual object; and detecting the position exchange skill of the first virtual object based on the skill release condition. That is, before controlling the first virtual object to release the position exchange skill in response to the trigger operation, the terminal device may first detect whether the current timing satisfies the skill release condition of the position exchange skill, i.e., whether the first virtual object is currently able to release the position exchange skill.
In practical application, different skill release conditions can be set for different first virtual objects in the virtual scene, so that personalized control effect is realized; of course, the same skill release condition can be set uniformly for different first virtual objects in the virtual scene, so as to reduce the consumption of computing resources of the terminal equipment.
In other embodiments, following on from the above embodiments, the skill release condition of the position exchange skill may be related to the action range of the position exchange skill, and the terminal device may implement the above detection of the position exchange skill of the first virtual object based on the skill release condition in the following way: acquiring the action range of the position exchange skill of the first virtual object; determining, when the second virtual object is within the action range, that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to; and when the second virtual object is located outside the action range, displaying first prompt information, wherein the first prompt information is used for prompting that the position exchange skill cannot be released and prompting how to move into the action range.
For example, the position exchange skill of the first virtual object has an action range (i.e., the first virtual object can only exchange positions with virtual objects within the action range of the position exchange skill, and cannot exchange positions with virtual objects outside the action range). For example, when the action range (also referred to as the spell range) of the position exchange skill of the first virtual object is 1000 yards, the action range may be a circle with a radius of 1000 yards centered on the position of the first virtual object, that is, the first virtual object can exchange positions with any other virtual object within the circle. Of course, the action range may also be related to the orientation of the first virtual object, that is, the first virtual object can only exchange positions with other virtual objects within a certain angle range corresponding to its orientation; for example, the action range may be a sector with a radius of 1000 yards centered on the position of the first virtual object, with an angle of 120° aligned with the orientation of the first virtual object. In this case, the first virtual object can only exchange positions with virtual objects within 1000 yards of itself, and cannot exchange positions with virtual objects farther away. The action range of the position exchange skill may be determined in the following manner: acquiring a state parameter of the first virtual object; and determining the action range of the position exchange skill of the first virtual object based on the acquired state parameter; where the state parameter includes at least one of: a level of the first virtual object, an activity level of the first virtual object (e.g., online time duration, frequency of participation in activities, etc.), and a life value of the first virtual object.
For example, the action range of the position exchange skill of the first virtual object may be positively correlated with the level of the first virtual object, that is, the higher the level of the first virtual object, the larger the action range of the corresponding position exchange skill; the action range may also be positively correlated with the activity level of the first virtual object, that is, the more active the first virtual object, the larger the action range; the action range may also be positively correlated with the life value of the first virtual object, that is, the higher the life value of the first virtual object, the larger the action range. Of course, the action range of the position exchange skill of the first virtual object may also be determined comprehensively according to any two or three of the above three state parameters, which is not limited in the embodiments of the present application.
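The positive correlation between the state parameters and the action range, together with the in-range check, can be pictured with the following minimal Python sketch; the function names and the weighting constants are illustrative assumptions and do not appear in the patent text.

```python
import math

# A minimal sketch only: the function names and weighting constants are hypothetical.
def scope_of_action(level, activity, life_value):
    """Action range (in yards) grows with level, activity level, and life value."""
    base = 500
    return base + 50 * level + 2 * activity + 0.1 * life_value

def within_scope(first_pos, second_pos, scope):
    """True when the second virtual object lies inside the circular action range
    centred on the first virtual object's position."""
    return math.dist(first_pos, second_pos) <= scope
```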
In the embodiments of the present application, by associating the level, activity level, and life value of the first virtual object with the action range, interaction between users in the virtual scene can be positively encouraged, thereby improving the user's experience in the virtual scene.
For example, referring to fig. 5A, fig. 5A is a schematic diagram of an application scenario of the virtual object control method provided in the embodiment of the present application. As shown in fig. 5A, when a user wants to control a first virtual object 510 located at a first position in a virtual scene 500 to release the position exchange skill so as to exchange positions with a second virtual object 520 located at a second position in the virtual scene 500, the user may first view the action range of the current position exchange skill of the first virtual object 510. For example, when the user presses the key "T" on the keyboard, the action range 530 of the position exchange skill of the first virtual object 510 is displayed in the virtual scene 500. When the second virtual object 520 is outside the action range 530 of the position exchange skill of the first virtual object 510, first prompt information 540 may be displayed in a popup window; for example, the first prompt information 540 may be "The target is outside the action range, please move forward before releasing the position exchange skill".
In other embodiments, the skill release condition of the position exchange skill may be related to a skill waiting time of the position exchange skill, and the terminal device may implement the above detection of the position exchange skill of the first virtual object based on the skill release condition in the following manner: acquiring the skill waiting time of the position exchange skill of the first virtual object; when the interval between a first time and a second time is smaller than the skill waiting time, displaying second prompt information, where the second prompt information is used for prompting that the position exchange skill cannot be released and prompting the remaining waiting time; and when the interval between the first time and the second time is greater than or equal to the skill waiting time, determining that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to; where the first time is the time at which the first virtual object was last controlled to release the position exchange skill, and the second time is the time at which the trigger operation is received.
For example, the position exchange skill of the first virtual object has a skill waiting time (also referred to as a cooldown time, which refers to the waiting time required before the same skill can be used again, abbreviated as CD), that is, the first virtual object cannot be controlled to release the position exchange skill continuously without waiting. The skill waiting time of the position exchange skill of the first virtual object may be related to a state parameter of the first virtual object (e.g., the level of the first virtual object, the life value of the first virtual object, etc.); for example, the higher the level of the first virtual object, the shorter the corresponding skill waiting time (e.g., when the level of the first virtual object is 5, the skill waiting time of the corresponding position exchange skill is 20 seconds, and when the level of the first virtual object is 10, the skill waiting time of the corresponding position exchange skill is 10 seconds).
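A cooldown check of this kind might look roughly like the following Python sketch; the function names are hypothetical, while the 20-second and 10-second figures follow the example above.

```python
# A minimal sketch only: hypothetical names; cooldowns follow the example above.
def cooldown_seconds(level):
    """Higher levels get a shorter skill waiting time."""
    return 20 if level < 10 else 10

def check_cooldown(last_release_time, trigger_time, level):
    """Respond to the trigger only if the interval between the last release
    (first time) and the trigger (second time) reaches the skill waiting time."""
    remaining = cooldown_seconds(level) - (trigger_time - last_release_time)
    if remaining > 0:
        # second prompt information
        return False, f"Skill cooling down, wait {remaining:.0f} s before releasing again"
    return True, None
```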
It should be noted that, in practical application, when the second virtual object is located within the action range of the position exchange skill of the first virtual object and the time at which the trigger operation is received has exceeded the skill waiting time of the first virtual object (for example, assuming the skill waiting time of the position exchange skill of the first virtual object is 20 seconds, the terminal device receives the trigger operation again after 30 seconds), the terminal device determines that, in response to the trigger operation for controlling the first virtual object to release the position exchange skill, the positions of the first virtual object and the second virtual object in the virtual scene will be exchanged. That is, the skill release condition of the position exchange skill needs to satisfy both the action range and the skill waiting time.
For example, referring to fig. 5B, fig. 5B is a schematic diagram of an application scenario of the virtual object control method provided in the embodiment of the present application. As shown in fig. 5B, when the terminal device receives an instruction triggered by the user to control the first virtual object 510 located at the first position in the virtual scene 500 to release the position exchange skill, it first detects whether the second virtual object 520 is within the action range 530 of the position exchange skill of the first virtual object 510. When it is detected that the second virtual object 520 is within the action range 530, it further determines whether the position exchange skill of the first virtual object 510 is still within the skill waiting time; if so, second prompt information 550 may be displayed in a popup window, for example, the second prompt information 550 may be "Skill cooling down, wait 10 seconds before releasing the position exchange skill again".
In some embodiments, the skill release condition of the position exchange skill may also be related to an energy value currently possessed by the first virtual object, and the terminal device may implement the above detection of the position exchange skill of the first virtual object based on the skill release condition in the following manner: acquiring the energy value of the first virtual object; when the energy value currently possessed by the first virtual object is greater than the energy value required to be consumed for releasing the position exchange skill, determining that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to; and when the first virtual object does not currently have a sufficient energy value (that is, the energy value currently possessed by the first virtual object is smaller than the energy value required to be consumed for releasing the position exchange skill), displaying third prompt information, where the third prompt information is used for prompting that the position exchange skill cannot be released and prompting that the energy value needs to be accumulated.
For example, the first virtual object consumes a certain amount of energy (e.g., an anger value, a magic value, etc.) when releasing the position exchange skill, that is, the position exchange skill cannot be released when the first virtual object does not currently have sufficient energy. The amount of energy that the first virtual object needs to consume for each release of the position exchange skill may be related to the level of the first virtual object; for example, the higher the level of the first virtual object, the less energy is consumed (e.g., when the level of the first virtual object is 5, releasing the position exchange skill consumes an energy value of 100; when the level is 10, releasing the position exchange skill consumes an energy value of 50). The amount of energy required to release the position exchange skill may also be related to the exchange distance, where the exchange distance is less than or equal to the action range; for example, when the action range is 1000 yards, the maximum exchange distance is 1000 yards, that is, the first virtual object may exchange positions with another virtual object 500 yards away or with another virtual object 800 yards away, and the energy consumed in the two cases may differ. For example, the greater the exchange distance, the greater the amount of energy required to be consumed (e.g., when the first virtual object exchanges positions with a second virtual object 1000 yards away, an energy value of 100 is consumed; when the first virtual object exchanges positions with a second virtual object 500 yards away, an energy value of 50 is consumed).
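An energy-cost check along these lines might be sketched as follows in Python; the names are hypothetical, and the cost figures mirror the examples above (100 energy at 1000 yards, 50 energy at 500 yards).

```python
# A minimal sketch only: hypothetical names; figures mirror the examples above.
def energy_cost(exchange_distance, action_range=1000):
    """Cost grows linearly with the exchange distance, capped at the action range."""
    return 100 * min(exchange_distance, action_range) / action_range

def check_energy(current_energy, exchange_distance):
    cost = energy_cost(exchange_distance)
    if current_energy < cost:
        # third prompt information
        return False, f"Energy value insufficient, accumulate {cost - current_energy:.0f} more"
    return True, None
```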
It should be noted that, in practical applications, the amount of energy required to be consumed by the first virtual object for each release of the position exchange skill may also be fixed, that is, independent of the level of the first virtual object or the exchange distance; for example, the first virtual object may consume an energy value of 100 each time it releases the position exchange skill, which is not limited in the embodiments of the present application.
Furthermore, it should be noted that the skill release condition of the position exchange skill may be related to the action range of the position exchange skill, the skill waiting time, and the energy value of the first virtual object at the same time; that is, only when the second virtual object is within the action range of the position exchange skill of the first virtual object, the time at which the trigger operation is received has exceeded the skill waiting time, and the first virtual object currently has a sufficient energy value does the terminal device determine that, in response to the trigger operation for controlling the first virtual object to release the position exchange skill, the positions of the first virtual object and the second virtual object in the virtual scene will be exchanged.
For example, referring to fig. 5C, fig. 5C is a schematic diagram of an application scenario of the virtual object control method provided in the embodiment of the present application. As shown in fig. 5C, when the terminal device receives an instruction triggered by the user to control the first virtual object 510 located at the first position in the virtual scene 500 to release the position exchange skill, it first detects whether the second virtual object 520 is within the action range 530 of the position exchange skill of the first virtual object 510. When the second virtual object 520 is within the action range 530, it further determines whether the position exchange skill of the first virtual object 510 is still within the skill waiting time. When the time at which the instruction is received has exceeded the skill waiting time, the terminal device further determines whether the first virtual object 510 currently has a sufficient energy value; if not, third prompt information 560 may be displayed in a popup window, for example, the third prompt information 560 may be "Energy value insufficient, please accumulate another 50 energy before releasing the position exchange skill".
In some embodiments, the skill release condition of the position exchange skill may be related to whether there is an obstacle between the first virtual object and the second virtual object, and the terminal device may implement the above detection of the position exchange skill of the first virtual object based on the skill release condition in the following manner: detecting an obstacle based on a ray between the first position where the first virtual object is located and the second position where the second virtual object is located; when it is detected that an obstacle exists between the first virtual object and the second virtual object, displaying fourth prompt information, where the fourth prompt information is used for prompting that the position exchange skill cannot be released due to the obstacle; and when no obstacle is detected between the first virtual object and the second virtual object, determining that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to.
For example, before the terminal device, in response to the trigger operation for controlling the first virtual object to release the position exchange skill, exchanges the positions of the first virtual object and the second virtual object in the virtual scene, it may detect whether an obstacle exists between the first virtual object and the second virtual object by casting a ray. For example, after the user selects the second virtual object in the virtual scene, the terminal device casts a ray from the first position where the first virtual object is located toward the second position where the second virtual object is located, so as to detect whether an obstacle exists between them. When an obstacle is detected, fourth prompt information is displayed, where the fourth prompt information is used for prompting the user that the position exchange skill cannot be released due to the obstacle, and may also prompt the user how to move to a position in the virtual scene where no obstacle is present. When no obstacle is detected between the first virtual object and the second virtual object, the terminal device determines that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to.
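The ray-based obstacle test could be approximated as in the following Python sketch, where a sampled 2D segment-versus-box test stands in for the engine's actual ray cast; all names here are hypothetical.

```python
# A minimal sketch only: a sampled segment-vs-box test replacing a real ray cast.
def segment_hits_box(p1, p2, box_min, box_max, steps=100):
    """Sample points along the ray from the first position to the second
    position and report whether any sample lies inside an obstacle's box."""
    for i in range(steps + 1):
        t = i / steps
        x = p1[0] + t * (p2[0] - p1[0])
        y = p1[1] + t * (p2[1] - p1[1])
        if box_min[0] <= x <= box_max[0] and box_min[1] <= y <= box_max[1]:
            return True
    return False

def obstacle_between(first_pos, second_pos, obstacle_boxes):
    """True when any obstacle blocks the line between the two positions."""
    return any(segment_hits_box(first_pos, second_pos, lo, hi)
               for lo, hi in obstacle_boxes)
```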
For example, referring to fig. 5D, fig. 5D is a schematic diagram of an application scenario of the virtual object control method provided in the embodiment of the present application. As shown in fig. 5D, when the terminal device receives an instruction triggered by the user to control the first virtual object 510 located at the first position in the virtual scene 500 to release the position exchange skill, a ray 570 is cast to detect whether an obstacle exists between the first virtual object 510 and the second virtual object 520 located at the second position. When it is detected that an obstacle 580 exists between the first virtual object 510 and the second virtual object 520, fourth prompt information 590 may be displayed in a popup window, for example, the fourth prompt information 590 may be "There is an obstacle on the current path, please move elsewhere before releasing the position exchange skill again".
It should be noted that, in practical application, the skill release condition may be related to the scope of action of the position exchange skill, the skill waiting time, the energy value of the first virtual object, and whether there is an obstacle on the path of the position exchange at the same time, that is, only when the second virtual object is within the scope of action of the position exchange skill of the first virtual object and the time of receiving the trigger operation has exceeded the skill waiting time, the first virtual object has a sufficient energy value, and there is no obstacle between the first virtual object and the second virtual object, the terminal device determines that the positions of the first virtual object and the second virtual object in the virtual scene are to be exchanged in response to the trigger operation of the position exchange skill controlling the release of the first virtual object.
In other embodiments, the skill release condition of the position exchange skill may be related to a physical rule that needs to be met when the positions of the first virtual object and the second virtual object are exchanged, and the terminal device may implement the above detection of the position exchange skill of the first virtual object based on the skill release condition in the following manner: acquiring the physical rule that needs to be met when the positions of the first virtual object and the second virtual object are exchanged, where the physical rule includes at least one of the following: the position of each of the first virtual object and the second virtual object has enough space to accommodate the other; and the path along which the positions are exchanged can support the first virtual object and the second virtual object passing in parallel. When the physical rule is met, it is determined that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to; when the physical rule is not met, fifth prompt information is displayed, where the fifth prompt information is used for prompting that the position exchange skill cannot be released because the physical rule is not met, and prompting to move to a position in the virtual scene that meets the physical rule.
For example, before the terminal device, in response to the trigger operation for controlling the first virtual object to release the position exchange skill, exchanges the positions of the first virtual object and the second virtual object in the virtual scene, it may further acquire the physical rule that needs to be met when the positions are exchanged, for example, determining whether the second position where the second virtual object is located has enough space to accommodate the first virtual object when the first virtual object moves there, or determining whether the exchange path can support the first virtual object and the second virtual object passing in parallel. When the physical rule is not met, fifth prompt information is displayed on the human-computer interaction interface to prompt the user that the position exchange skill cannot be released because the physical rule is not met (for example, the space at the second position where the second virtual object is located is too small), and to prompt the user to control the first virtual object to move to a position in the virtual scene that meets the physical rule. When the physical rule is met, the terminal device determines that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to.
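A physical-rule gate of this kind might be sketched as follows in Python; the size and width values are placeholders for the engine's actual collision queries, and all names are hypothetical.

```python
# A minimal sketch only: placeholder values stand in for real collision queries.
def physical_rules_met(first_size, second_size,
                       free_space_at_first, free_space_at_second, path_width):
    """Each destination must accommodate the incoming object, and the exchange
    path must be wide enough for the two objects to pass in parallel."""
    swap_ok = (free_space_at_second >= first_size and
               free_space_at_first >= second_size)
    path_ok = path_width >= first_size + second_size
    return swap_ok and path_ok
```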
For example, referring to fig. 5E, fig. 5E is a schematic diagram of an application scenario of the virtual object control method provided in the embodiment of the present application. As shown in fig. 5E, when the terminal device receives an instruction triggered by the user to control the first virtual object 510 located at the first position in the virtual scene 500 to release the position exchange skill, it first acquires the physical rule that needs to be met when the positions of the first virtual object 510 and the second virtual object 520 are exchanged. When the physical rule is not met, fifth prompt information 5100 may be displayed in a popup window, for example, the fifth prompt information 5100 may be "The space at the current exchange target is too small to release the position exchange skill, please reselect the exchange target".
It should be noted that, in practical application, the skill release condition may be related to the scope of action of the position exchange skill, the skill waiting time, the energy value of the first virtual object, whether there is an obstacle on the path of the position exchange, and the physical rule to which the exchange position needs to conform at the same time, that is, only when the second virtual object is within the scope of action of the position exchange skill of the first virtual object and the time of receiving the trigger operation has exceeded the skill waiting time, the first virtual object currently has sufficient energy value, no obstacle exists between the first virtual object and the second virtual object, and the physical rule is met, the terminal device determines that the positions of the first virtual object and the second virtual object in the virtual scene are to be exchanged in response to the trigger operation of the position exchange skill controlling the release of the first virtual object.
In some embodiments, before responding to the trigger operation for controlling the first virtual object to release the position exchange skill, the terminal device may further perform the following operation: displaying a lock identifier corresponding to the second virtual object, where the lock identifier is used for indicating that the first virtual object can exchange positions with the second virtual object.
For example, referring to fig. 5F, fig. 5F is an application scenario schematic diagram of a virtual object control method provided in the embodiment of the present application, as shown in fig. 5F, after a position exchange skill of a first virtual object is detected by a terminal device based on a skill release condition (for example, an action range of a position exchange skill, a skill waiting time, an energy value of the first virtual object, etc.), a lock identifier 5100 is displayed on a second virtual object 520, where the lock identifier 5100 is used to prompt a user that the second virtual object 520 can exchange a position with the first virtual object 510 in the virtual scene 500.
In practical application, in addition to the specific pattern shown in fig. 5F, the lock identifier may also be text, or a combination of text and a specific pattern, which is not limited in this embodiment of the present application.
In other embodiments, the terminal device may implement the above-mentioned exchanging the positions of the first virtual object and the second virtual object in the virtual scene by: firstly, moving a first virtual object from a first position in a virtual scene to a second position in the virtual scene according to a preset speed (such as the fastest speed which can be realized by the first virtual object); then, the second position of the second virtual object in the virtual scene is directly updated to the first position in the virtual scene, or the second virtual object is moved from the second position in the virtual scene to the first position in the virtual scene according to a preset speed (such as the fastest speed that the second virtual object can achieve). That is, after the location exchange is completed, the first virtual object may appear at a second location in the virtual scene, and the second virtual object may appear at the first location in the virtual scene, so as to implement the exchange of the locations of the first virtual object and the second virtual object.
For example, the terminal device may implement the above moving of the first virtual object from the first position to the second position in the virtual scene at the preset speed in the following manner: controlling the first virtual object to travel at the preset speed along an exchange path from the first position to the second position, and automatically avoiding obstacles (objects that can block the passage of a virtual object, such as large stones, pits, etc.) and trap props (props that can damage a virtual object and reduce its life value, such as land mines, crossbows, etc.) existing on the exchange path. For example, the first virtual object may be controlled to move in flight from the first position to the second position, avoiding the obstacles and trap props present on the exchange path.
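The frame-by-frame movement of the first object toward the second position, followed by a direct update of the second object's position, could be pictured with the following Python sketch; the dictionary field names and default values are hypothetical.

```python
# A minimal sketch only: hypothetical field names; movement is simple interpolation.
def step_towards(pos, target, max_speed, dt):
    """Advance one frame along the exchange path; returns (new_pos, arrived)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= max_speed * dt:
        return target, True
    k = max_speed * dt / dist
    return (pos[0] + dx * k, pos[1] + dy * k), False

def exchange_positions(first, second, max_speed=30.0, dt=1 / 60):
    first_origin = first["position"]
    arrived = False
    while not arrived:
        first["position"], arrived = step_towards(first["position"],
                                                  second["position"],
                                                  max_speed, dt)
    second["position"] = first_origin   # directly update the second position
```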
In some embodiments, the terminal device may further perform the following operations when controlling the first virtual object to travel at a preset speed on the exchange path from the first location to the second location: the energy value of the first virtual object is hidden (i.e., set to an unusable state) to mask triggering operations responsive to the position exchange skills that control repeated release of the first virtual object during travel. That is, by hiding the energy value of the first virtual object, the user cannot control the first virtual object to repeatedly release the position exchange skill in the process of performing position exchange on the first virtual object, so that software loopholes (bugs) possibly caused by repeatedly releasing the position exchange skill are avoided, and stability of software is ensured.
In other embodiments, when controlling the first virtual object to travel at the preset speed along the exchange path from the first position to the second position, the terminal device may further perform the following operations: closing the collision box corresponding to the first virtual object, so as to avoid collisions between the first virtual object and other virtual objects during travel, and placing the response of the first virtual object to external operations in a locked state, so that the user cannot operate the first virtual object while it is traveling; and, after the position exchange is completed (i.e., after the first virtual object has moved to the second position), re-enabling the collision box corresponding to the first virtual object and releasing the locked state set for the first virtual object. That is, the user can continue to operate the first virtual object once the position exchange is completed.
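The state toggles around the exchange (collision box, input lock, hidden energy) might look roughly like the following Python sketch; the flag names are hypothetical stand-ins for engine-specific state.

```python
# A minimal sketch only: hypothetical flags standing in for engine-specific state.
def begin_exchange(first):
    first["collider_enabled"] = False   # close the collision box during travel
    first["input_locked"] = True        # lock responses to external operations
    first["energy_visible"] = False     # hide energy so the skill cannot be re-triggered

def end_exchange(first):
    first["collider_enabled"] = True    # re-enable the collision box on arrival
    first["input_locked"] = False       # release the locked state
    first["energy_visible"] = True      # restore the energy value
```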
In some embodiments, referring to fig. 6, after the terminal device performs step S102 shown in fig. 4, the terminal device may further perform step S103 shown in fig. 6, which will be described in connection with step S103 shown in fig. 6.
In step S103, in response to controlling the crossing operation of the first virtual object, a process in which the first virtual object crosses the obstacle by means of the second virtual object is displayed.
In some embodiments, when the height of an obstacle (e.g., a piece of high ground) present in the virtual scene exceeds the maximum jump height that the first virtual object can achieve (for example, assuming the maximum jump height the first virtual object can achieve is 60 yards and the height of the high ground is 100 yards), the user may control the first virtual object to cross the obstacle with the aid of the second virtual object.
By way of example, assume that there is a piece of high ground A at a certain location in the virtual scene, and the height of the high ground A exceeds the maximum jump height that the first virtual object can achieve, that is, the first virtual object cannot cross the high ground A by its own jumping ability alone. The first virtual object may then be controlled to release the position exchange skill so as to move a second virtual object (e.g., a stone) to the vicinity of the high ground A, after which the first virtual object can cross the high ground A with the aid of the stone (i.e., the first virtual object is first controlled to climb onto the stone and then controlled to jump over the high ground A). In this way, by means of the position exchange skill, the first virtual object is able to cross an obstacle that it previously could not.
In other embodiments, the position exchange skill may be predicted by invoking a machine learning model. The machine learning model may be executed locally on the terminal device; for example, after the server trains the machine learning model, the server issues the trained machine learning model to the terminal device. The machine learning model may also be deployed in a server; for example, after the terminal device acquires feature data of the first virtual object, where the feature data includes at least one of an action range, a skill waiting time, and an energy value, the terminal device uploads the feature data, the first position of the first virtual object in the virtual scene, and the second position of the second virtual object in the virtual scene to the server, so that the server invokes the machine learning model based on the feature data, the first position, and the second position to obtain probabilities of a plurality of corresponding candidate skills, where the plurality of candidate skills include the position exchange skill. Then, the server sends the obtained probabilities of the plurality of candidate skills to the terminal device, and when the maximum probability corresponds to the position exchange skill, sixth prompt information is displayed on the human-computer interaction interface of the terminal device, where the sixth prompt information is used for prompting to release the position exchange skill.
It should be noted that the machine learning model may be a neural network model (such as a convolutional neural network, a deep convolutional neural network, or a fully connected neural network), a decision tree model, a gradient boosting tree, a multi-layer perceptron, a support vector machine, or the like; the type of the machine learning model is not specifically limited in the embodiments of the present application.
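As a rough illustration of producing candidate-skill probabilities, the Python sketch below uses a toy linear model with a softmax; this stands in for whatever model (neural network, decision tree, gradient boosting tree, and so on) is actually deployed, and all names are hypothetical.

```python
import math

# A minimal sketch only: a toy linear scorer with softmax; not the deployed model.
CANDIDATE_SKILLS = ["position_exchange", "jump", "sprint"]

def predict_skill(features, weights):
    """features: e.g. (action_range, skill_waiting_time, energy, dx, dy);
    weights: one weight vector per candidate skill."""
    scores = [sum(w * f for w, f in zip(ws, features)) for ws in weights]
    exps = [math.exp(s - max(scores)) for s in scores]
    probs = [e / sum(exps) for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return CANDIDATE_SKILLS[best], probs[best]
```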
According to the virtual object control method provided in the embodiments of the present application, by controlling the first virtual object to release the position exchange skill, the positions of the first virtual object and the second virtual object in the virtual scene are exchanged, so that the first virtual object can be moved directly from the first position to the second position in the virtual scene, which simplifies the interaction process and thereby reduces the consumption of computing resources.
In the following, taking a game scenario as an example, an exemplary application of the embodiment of the present application in an actual application scenario is described.
With the development of indie games and the continuing expansion of the stand-alone game market, users (or players) place ever higher demands on the creative originality and cultural and aesthetic quality of games, and games with innovative play mechanisms have become an important requirement; improving the level of creative game design is therefore critical.
For example, take the 2D platform action game. This is a genre with a long history, dating back to titles such as "Ma XX Bros."; in a 2D platform action game, the player controls a game character to run, fight, and perform other actions under the influence of gravity, with an emphasis on testing the player's execution. The genre has been developed for many years, and at present the related art offers relatively little innovation in either level design or core mechanics.
For example, when controlling a game character to move in a game scene or to cross an obstacle in the game scene, the related art generally does so by controlling the game character to jump (or sprint), or by firing a projectile that, upon hitting a target wall in the game scene, teleports the character there.
However, ways of controlling the continuous movement of the game character (e.g., jumping, sprinting, etc.) have already been developed extensively in indie games, and their interaction process is complicated (that is, the player must operate continuously while controlling the game character to move from one location to another in the game scene), which wastes the computing resources of the terminal device. As for the projectile-based teleport approach, the projectile also takes a long time in flight and cannot achieve the quick exchange of a direct click, so the user's game experience is poor.
In view of this, for the 2D platform action game, the virtual object control method provided in the embodiments of the present application offers an innovative game mechanism: the player can aim with the mouse at a target exchange object (corresponding to the second virtual object) in the game scene and click to rapidly exchange the positions of the game character (corresponding to the first virtual object) and the target exchange object in the game scene, thereby creating many ways to overcome obstacles in the game scene.
For example, the player may move the cursor with the mouse to aim at the target exchange object in the game scene. When the line between the target exchange object and the player-controlled game character is free of obstacles, the player may click the left mouse button to trigger the game character to release the position exchange skill. Releasing the position exchange skill consumes the game character's energy value, and the energy value is restored when the game character lands (i.e., after the position exchange is completed); this prevents the bug of the player being able to control the game character to release the position exchange skill without limit while the character is in the air. Then, after a brief exchange animation is displayed on the human-computer interaction interface of the terminal device, the player-controlled game character appears at the position in the game scene previously occupied by the target exchange object, and the target exchange object appears at the position previously occupied by the game character, that is, the positions of the game character and the target exchange object in the game scene are exchanged.
The virtual object control method provided by the embodiment of the application has a very wide application scene.
In some embodiments, obstacles in the game scene may be spanned by controlling the first virtual object to release the position exchange skill.
For example, referring to fig. 7A, fig. 7A is a schematic diagram of an application scenario of the virtual object control method provided in the embodiment of the present application. As shown in fig. 7A, a first virtual object 710 (e.g., a game character controlled by a real player) located at a first position and a second virtual object 720 (e.g., a stone present in the game scene) located at a second position are displayed in a game scene 700. When an obstacle 730 (e.g., a pit) exists between the first virtual object 710 and the second virtual object 720, the player can cross the obstacle 730 by controlling the first virtual object 710 to release the position exchange skill. For example, the player may first move the cursor with the mouse to aim at the target exchange object (e.g., the second virtual object 720 in fig. 7A) in the game scene 700; at this time, an aperture indicating locking may appear on the second virtual object 720 to prompt the player that the position exchange will take place with the second virtual object 720, and in addition, the outline of the second virtual object 720 may be highlighted to make the prompt more obvious. Then, the player may click the left mouse button to trigger the first virtual object 710 to release the position exchange skill. When there is no obstacle (e.g., a wall) between the first virtual object 710 and the second virtual object 720, after a brief exchange animation is displayed, the first virtual object 710 appears at the position in the game scene 700 previously occupied by the second virtual object 720, and the second virtual object 720 appears at the position previously occupied by the first virtual object 710. Thus the first virtual object 710, which was originally in front of the obstacle 730, appears at the other end of the obstacle 730 after the position exchange, thereby completing the obstacle crossing.
In other embodiments, high ground in the game scene that previously could not be crossed may also be crossed by controlling the first virtual object to release the position exchange skill.
For example, referring to fig. 7B, fig. 7B is a schematic diagram of an application scenario of the virtual object control method provided in the embodiment of the present application. As shown in fig. 7B, when the player, while controlling the first virtual object 710 to move through the game scene 700, encounters high ground 740 that cannot be crossed by the jumping ability of the first virtual object 710 alone, the player may first control the first virtual object 710 to release the position exchange skill so as to exchange a second virtual object 720 (e.g., a stone present in the game scene) in the game scene 700 to a position below the high ground 740. The player may then control the first virtual object 710 to climb onto the second virtual object 720 and continue jumping, thereby increasing the jump height of the first virtual object 710, climbing onto the high ground 740 that originally could not be crossed, and achieving the purpose of obstacle crossing.
The method for controlling the virtual object provided in the embodiment of the present application is specifically described below from the technical side.
For example, referring to fig. 8, fig. 8 is a flowchart of the virtual object control method provided in the embodiment of the present application. As shown in fig. 8, the terminal device first determines a target exchange object in the game scene (i.e., the virtual object in the game scene whose position will be exchanged with that of the player-controlled game character); for example, the player may move the cursor with the mouse to aim at a virtual object in the game scene, and the terminal device takes the virtual object the player is aiming at as the target exchange object. Next, the terminal device determines whether an obstacle (e.g., a wall) exists between the player-controlled game character and the target exchange object, for example, by casting a ray between them. When an obstacle exists between the game character and the target exchange object, corresponding prompt information is displayed on the human-computer interaction interface to prompt the player to reselect the target exchange object.
When there is no obstacle between the game character and the target exchange object, it is determined whether the game character is currently in the air. When the game character is not currently in the air, the terminal device determines that the instruction for controlling the game character to release the position exchange skill will be responded to, and, in response to the instruction triggered by the user, exchanges the positions of the game character and the target exchange object in the game scene. For example, when the player presses the left mouse button, the game character is triggered to release the position exchange skill: the player's control input is locked, the game character's collision box is closed so that the game character does not collide with other flying objects in the game scene while moving through the air, and the game character is moved to the position of the target exchange object in the game scene at a certain speed limit. As for the target exchange object, in order to avoid the visual interference caused by the target exchange object and the game character both moving through the air at the same time, its position is directly modified, after a certain delay, to the position previously occupied by the game character in the game scene.
When the game character is currently in the air, it is further determined whether the game character has a sufficient energy value. When the game character does not have a sufficient energy value, the instruction triggered by the player for controlling the game character to release the position exchange skill is ignored; when the game character has a sufficient energy value, the terminal device determines that the instruction triggered by the player for controlling the game character to release the position exchange skill will be responded to, and exchanges the positions of the game character and the target exchange object in the game scene.
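A compressed version of the flow in fig. 8 might be sketched as follows in Python; the dictionary field names (position, in_air, energy) and the cost formula are hypothetical and only tie the above steps together.

```python
import math

# A minimal sketch only: a compressed version of the flow in fig. 8 with
# hypothetical field names and a placeholder cost formula.
def on_left_click(character, target, wall_between):
    if wall_between:
        return "Obstacle between character and target, please reselect the target"
    cost = 100 * min(math.dist(character["position"], target["position"]), 1000) / 1000
    if character["in_air"] and character["energy"] < cost:
        return None                      # ignore the instruction while airborne
    # exchange the positions; the character's energy is restored on landing
    character["position"], target["position"] = target["position"], character["position"]
    character["energy"] -= cost
    return "positions exchanged"
```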
According to the virtual object control method provided in the embodiments of the present application, an innovative game mechanism (namely, the position exchange skill of the game character) is provided for the 2D platform action game, so that many special levels and puzzles can be designed around the exchange mechanism. Compared with the jump or sprint control provided by the related art, this increases the fun of the game and greatly improves the user's game experience.
Continuing with the description below of an exemplary architecture in which the virtual object control device 465 provided in embodiments of the present application is implemented as a software module, in some embodiments, as shown in fig. 3, the software modules stored in the virtual object control device 465 of the memory 460 may include: a display module 4651 and a switching module 4652.
A display module 4651, configured to display a virtual scene on the human-computer interaction interface, where the virtual scene includes a first virtual object located at a first position and a second virtual object located at a second position; and an exchange module 4652, configured to exchange the positions of the first virtual object and the second virtual object in the virtual scene in response to a trigger operation for controlling the first virtual object to release the position exchange skill, so that the first virtual object is located at the second position and the second virtual object is located at the first position.
In some embodiments, the control device 465 of the virtual object further comprises an acquisition module 4653 for acquiring a skill release condition corresponding to a location exchange skill of the first virtual object; the control means 465 of the virtual object further comprises a detection module 4654 for detecting a position exchange skill of the first virtual object based on a skill release condition.
In some embodiments, the detection module 4654 is further configured to obtain the action range of the position exchange skill of the first virtual object; determine, when the second virtual object is within the action range, that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to; and display first prompt information when the second virtual object is outside the action range, where the first prompt information is used for prompting that the position exchange skill cannot be released and prompting how to move into the action range.
In some embodiments, the obtaining module 4653 is further configured to obtain a state parameter of the first virtual object; the control device 465 of the virtual object further includes a determining module 4655, configured to determine the action range of the position exchange skill of the first virtual object based on the state parameter; where the state parameter includes at least one of: a level of the first virtual object, an activity level of the first virtual object, and a life value of the first virtual object.
In some embodiments, the detection module 4654 is further configured to obtain the skill waiting time of the position exchange skill of the first virtual object; display second prompt information when the interval between the first time and the second time is smaller than the skill waiting time, where the second prompt information is used for prompting that the position exchange skill cannot be released and prompting the remaining waiting time; and determine, when the interval between the first time and the second time is greater than or equal to the skill waiting time, that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to; where the first time is the time at which the first virtual object was last controlled to release the position exchange skill, and the second time is the time at which the trigger operation is received.
In some embodiments, the detection module 4654 is further configured to obtain an energy value of the first virtual object; determining a trigger operation to be responsive to the location exchange skills controlling the release of the first virtual object when the energy value of the first virtual object is greater than the energy value required to release the location exchange skills; when the first virtual object does not have enough energy value currently, third prompt information is displayed, wherein the third prompt information is used for prompting that the position exchange skills cannot be released and prompting that the energy value needs to be accumulated.
In some embodiments, detection module 4654 is further configured to detect an obstacle based on radiation between the first location and the second location; when detecting that an obstacle exists between the first virtual object and the second virtual object, displaying fourth prompting information, wherein the fourth prompting information is used for prompting that the position exchange skill cannot be released due to the obstacle; when no obstacle is detected between the first virtual object and the second virtual object, a trigger operation is determined that will be responsive to a position exchange skill controlling the release of the first virtual object.
In some embodiments, the detection module 4654 is further configured to obtain the physical rule that needs to be met when the positions of the first virtual object and the second virtual object are exchanged, where the physical rule includes at least one of the following: the position of each of the first virtual object and the second virtual object has enough space to accommodate the other; and the path along which the positions are exchanged can support the first virtual object and the second virtual object passing in parallel; determine, when the physical rule is met, that the trigger operation for controlling the first virtual object to release the position exchange skill will be responded to; and display fifth prompt information when the physical rule is not met, where the fifth prompt information is used for prompting that the position exchange skill cannot be released because the physical rule is not met and prompting to move to a position that meets the physical rule.
In some embodiments, the display module 4651 is further configured to display a lock identifier corresponding to the second virtual object; wherein the lock identifier is used to characterize that the first virtual object is capable of position exchange with the second virtual object.
In some embodiments, the exchange module 4652 is further configured to move the first virtual object from the first location to a second location in the virtual scene at a preset speed; and the method is used for directly updating the second position of the second virtual object in the virtual scene to the first position in the virtual scene, or moving the second virtual object from the second position to the first position in the virtual scene according to a preset speed.
In some embodiments, the exchange module 4652 is further configured to control the first virtual object to travel at a preset speed on an exchange path from the first location to the second location, and automatically evade obstacles and mechanical props present on the exchange path.
In some embodiments, the control device 465 of the virtual object further includes a hiding module 4656 for hiding the energy value of the first virtual object, so as to mask trigger operations for controlling the first virtual object to repeatedly release the position exchange skill during travel.
In some embodiments, the control device 465 of the virtual object further includes a closing module 4657 for closing a crash box corresponding to the first virtual object and placing a response of the first virtual object to an external operation in a locked state; the control device 465 of the virtual object further includes an opening module 4658 for opening a crash box corresponding to the first virtual object and releasing a lock state set for the first virtual object.
In some embodiments, the display module 4651 is further configured to display a first virtual object located at a first location in the virtual scene; and in response to a trigger operation controlling the summoning skill released by the first virtual object, displaying the summoned second virtual object at a second position of the virtual scene.
In some embodiments, the display module 4651 is further configured to display a plurality of third virtual objects in a preset manner in the virtual scene; the determining module 4655 is further configured to, in response to a virtual object selection operation, treat a selected third virtual object of the plurality of third virtual objects as a second virtual object.
In some embodiments, the display module 4651 is further configured to display a process of the first virtual object crossing the obstacle with the second virtual object in response to controlling a crossing operation of the first virtual object; wherein the height of the obstacle exceeds the jump height that the first virtual object can achieve without the aid of the second virtual object.
In some embodiments, the obtaining module 4653 is further configured to obtain feature data of the first virtual object; the control means 465 of the virtual object further comprises a calling module 4659 for calling the machine learning model based on the feature data, the first location and the second location, resulting in probabilities of a corresponding plurality of candidate skills, the plurality of candidate skills comprising a location exchange skill; the display module 4651 is further configured to display a sixth prompt when the maximum probability corresponds to the location exchange skill, where the sixth prompt is used to prompt to release the location exchange skill; wherein the characteristic data includes at least one of: scope of action, skill waiting time, energy value.
It should be noted that, in the embodiment of the present application, the description of the device is similar to the implementation of the control method of the virtual object, and has similar beneficial effects, so that a detailed description is omitted. The technical details of the control device for a virtual object provided in the embodiment of the present application may be understood according to the description of any one of fig. 4, fig. 6, or fig. 8.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the virtual object control method according to the embodiment of the present application.
The embodiments of the present application provide a computer readable storage medium storing executable instructions which, when executed by a processor, cause the processor to perform the method provided by the embodiments of the present application, for example, the virtual object control method shown in fig. 4, fig. 6, or fig. 8.
In some embodiments, the computer readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or a CD-ROM; or it may be any device including one of or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML, hyper Text Markup Language) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
In summary, the embodiment of the present application controls the first virtual object to release the position exchange skill, so that the position exchange of the first virtual object and the second virtual object in the virtual scene is realized, and thus, the first virtual object can be directly moved from the first position in the virtual scene to the second position in the virtual scene, the interaction process is simplified, and further, the consumption of computing resources is reduced.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and scope of the present application are intended to be included within the scope of the present application.

Claims (15)

1. A method for controlling a virtual object, the method comprising:
displaying a first virtual object located at a first position in a virtual scene, wherein the virtual scene is displayed through any one of the following viewing angles: a first-person viewing angle, a third-person viewing angle, and a bird's-eye viewing angle, and the virtual scene can be switched among the first-person viewing angle, the third-person viewing angle, and the bird's-eye viewing angle at will;
Obtaining feature data of the first virtual object, wherein the feature data comprises at least one of the following: action range, skill waiting time, energy value;
invoking a machine learning model based on the feature data, the first location, and a second location of the virtual scene, to obtain probabilities of a corresponding plurality of candidate skills, the plurality of candidate skills including a location exchange skill;
when the maximum probability corresponds to the position exchange skill, displaying sixth prompt information, wherein the sixth prompt information is used for prompting the release of the position exchange skill;
responding to the triggering operation of calling skills released by the first virtual object, and displaying a second called virtual object at the second position;
responding to the triggering operation of the position exchange skill released by the first virtual object, moving the first virtual object from the first position to the second position according to a preset speed, and hiding the energy value of the first virtual object so as to shield the triggering operation of repeatedly releasing the position exchange skill in response to the first virtual object in the moving process, wherein the position exchange skill is predicted by calling the machine learning model;
Directly updating the second position of the second virtual object in the virtual scene to the first position in the virtual scene, or directly updating the second position of the second virtual object in the virtual scene to the first position in the virtual scene after a preset time;
in response to controlling a crossing operation of the first virtual object, displaying a process of crossing an obstacle by the first virtual object with the second virtual object;
wherein the height of the obstacle exceeds the jump height that the first virtual object can achieve without the aid of the second virtual object.
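By way of illustration only, and not as part of the claim, the following sketch shows one way the feature data and the two positions could be fed to a skill-suggestion model and how the highest-probability skill could drive the sixth prompt information; predict_skill_probabilities is a hypothetical placeholder, not the model actually used:

```python
# Illustrative sketch only: a placeholder skill-suggestion model over the feature
# data and the two positions. A trained classifier would normally produce the scores.
from typing import Dict, Tuple

Position = Tuple[float, float, float]

def predict_skill_probabilities(features: Dict[str, float],
                                first: Position,
                                second: Position) -> Dict[str, float]:
    # Placeholder heuristic standing in for the machine learning model.
    distance = sum((a - b) ** 2 for a, b in zip(first, second)) ** 0.5
    in_range = distance <= features.get("action_range", 0.0)
    return {"position_exchange": 0.8 if in_range else 0.1, "normal_attack": 0.2}

features = {"action_range": 15.0, "skill_wait_time": 0.0, "energy": 50.0}
probs = predict_skill_probabilities(features, (0.0, 0.0, 0.0), (10.0, 0.0, 5.0))
best_skill = max(probs, key=probs.get)
if best_skill == "position_exchange":
    print("prompt: the position exchange skill can be released")  # sixth prompt information
```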
2. The method of claim 1, wherein, before responding to the triggering operation for controlling the first virtual object to release the position exchange skill, the method further comprises:
acquiring a skill release condition corresponding to the position exchange skill of the first virtual object;
detecting the position exchange skill of the first virtual object based on the skill release condition.
3. The method of claim 2, wherein the detecting the position exchange skill of the first virtual object based on the skill release condition comprises:
acquiring the action range of the position exchange skill of the first virtual object;
when the second virtual object is within the action range, determining to respond to the triggering operation for controlling the first virtual object to release the position exchange skill;
when the second virtual object is outside the action range, displaying first prompt information, wherein the first prompt information is used for prompting that the position exchange skill cannot be released and prompting how to move into the action range.
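By way of illustration only, the action-range check described in claim 3 could be expressed as a simple distance comparison; the names below are hypothetical:

```python
# Illustrative sketch only: check whether the second virtual object is inside the
# action range of the position exchange skill before responding to the trigger.
import math
from typing import Tuple

Position = Tuple[float, float, float]

def within_action_range(first: Position, second: Position, action_range: float) -> bool:
    return math.dist(first, second) <= action_range

if within_action_range((0.0, 0.0, 0.0), (10.0, 0.0, 5.0), action_range=15.0):
    print("respond to the triggering operation")
else:
    print("first prompt information: the skill cannot be released, move into range")
```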
4. The method of claim 3, wherein the acquiring the action range of the position exchange skill of the first virtual object comprises:
acquiring a state parameter of the first virtual object;
determining the action range of the position exchange skill of the first virtual object based on the state parameter;
wherein the state parameter comprises at least one of the following: a grade of the first virtual object, an activity level of the first virtual object, and a life value of the first virtual object.
5. The method of claim 2, wherein the detecting the position exchange skill of the first virtual object based on the skill release condition comprises:
acquiring the skill waiting time of the position exchange skill of the first virtual object;
when the interval between a first time and a second time is less than the skill waiting time, displaying second prompt information, wherein the second prompt information is used for prompting that the position exchange skill cannot be released and prompting the waiting time;
when the interval between the first time and the second time is greater than or equal to the skill waiting time, determining to respond to the triggering operation for controlling the first virtual object to release the position exchange skill;
wherein the first time is the time when the first virtual object was last controlled to release the position exchange skill, and the second time is the time when the triggering operation is received.
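By way of illustration only, the skill waiting time check of claim 5 could compare the interval between the two times against the waiting time; the names below are hypothetical and times are assumed to be in seconds:

```python
# Illustrative sketch only: the skill waiting time (cooldown) check.
def can_release(first_time: float, second_time: float, skill_wait_time: float) -> bool:
    """first_time: when the skill was last released; second_time: when the trigger arrived."""
    return (second_time - first_time) >= skill_wait_time

if can_release(first_time=100.0, second_time=104.0, skill_wait_time=6.0):
    print("respond to the triggering operation")
else:
    print("second prompt information: wait before releasing the position exchange skill again")
```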
6. The method of claim 2, wherein the detecting the position exchange skill of the first virtual object based on the skill release condition comprises:
acquiring the energy value of the first virtual object;
when the energy value of the first virtual object is greater than the energy value required to release the position exchange skill, determining to respond to the triggering operation for controlling the first virtual object to release the position exchange skill;
when the first virtual object does not currently have a sufficient energy value, displaying third prompt information, wherein the third prompt information is used for prompting that the position exchange skill cannot be released and prompting that the energy value needs to be accumulated.
7. The method of claim 2, wherein the detecting the position exchange skill of the first virtual object based on the skill release condition comprises:
detecting an obstacle based on a ray between the first position and the second position;
when an obstacle is detected between the first virtual object and the second virtual object, displaying fourth prompt information, wherein the fourth prompt information is used for prompting that the position exchange skill cannot be released due to the obstacle;
when no obstacle is detected between the first virtual object and the second virtual object, determining to respond to the triggering operation for controlling the first virtual object to release the position exchange skill.
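By way of illustration only, the ray-based obstacle detection of claim 7 could be approximated by sampling points along the segment between the two positions against axis-aligned bounding boxes; a real engine would use its own raycast query, and the names below are hypothetical:

```python
# Illustrative sketch only: sample points on the segment between the two positions
# and test them against axis-aligned bounding boxes standing in for obstacles.
from typing import List, Tuple

Position = Tuple[float, float, float]
AABB = Tuple[Position, Position]  # (min corner, max corner)

def point_in_box(p: Position, box: AABB) -> bool:
    lo, hi = box
    return all(lo[i] <= p[i] <= hi[i] for i in range(3))

def obstacle_between(first: Position, second: Position,
                     obstacles: List[AABB], samples: int = 64) -> bool:
    for k in range(samples + 1):
        t = k / samples
        point = tuple(a + (b - a) * t for a, b in zip(first, second))
        if any(point_in_box(point, box) for box in obstacles):
            return True
    return False

walls = [((4.0, -1.0, -1.0), (5.0, 3.0, 3.0))]
if obstacle_between((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), walls):
    print("fourth prompt information: an obstacle blocks the position exchange skill")
```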
8. The method of claim 2, wherein the detecting the position exchange skill of the first virtual object based on the skill release condition comprises:
acquiring physical rules that need to be met when the positions of the first virtual object and the second virtual object are exchanged;
wherein the physical rules comprise at least one of the following: there is enough space at the positions of the first virtual object and the second virtual object to accommodate each other; and the path between the exchanged positions can support the first virtual object and the second virtual object passing side by side;
when the physical rules are met, determining to respond to the triggering operation for controlling the first virtual object to release the position exchange skill;
when the physical rules are not met, displaying fifth prompt information, wherein the fifth prompt information is used for prompting that the position exchange skill cannot be released because the physical rules are not met and prompting a move to a position that meets the physical rules.
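By way of illustration only, the physical rules of claim 8 could be checked as two clearance conditions; clearance_at and path_width below are hypothetical stand-ins for whatever spatial queries the virtual scene actually provides:

```python
# Illustrative sketch only: the two physical rules as simple clearance checks.
from typing import Callable, Tuple

Position = Tuple[float, float, float]

def physical_rules_met(first: Position, second: Position,
                       first_radius: float, second_radius: float,
                       clearance_at: Callable[[Position], float],
                       path_width: Callable[[Position, Position], float]) -> bool:
    # Rule 1: each destination has enough space to accommodate the arriving object.
    if clearance_at(first) < second_radius or clearance_at(second) < first_radius:
        return False
    # Rule 2: the exchange path is wide enough for both objects to pass side by side.
    return path_width(first, second) >= (first_radius + second_radius) * 2

ok = physical_rules_met((0.0, 0.0, 0.0), (8.0, 0.0, 0.0), 0.5, 0.5,
                        clearance_at=lambda p: 2.0,
                        path_width=lambda a, b: 3.0)
print("respond to trigger" if ok else "fifth prompt information: move to a position that meets the rules")
```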
9. The method of claim 1, wherein, before responding to the triggering operation for controlling the first virtual object to release the position exchange skill, the method further comprises:
displaying a locking identifier corresponding to the second virtual object;
wherein the locking identifier is used to indicate that the first virtual object is capable of exchanging positions with the second virtual object.
10. The method of claim 1, wherein the moving the first virtual object from the first position to the second position at a preset speed comprises:
controlling the first virtual object to travel at the preset speed on an exchange path from the first position to the second position, and automatically avoiding obstacles and mechanism props on the exchange path.
11. The method of claim 10, wherein, while the first virtual object is controlled to travel at the preset speed on the exchange path from the first position to the second position, the method further comprises:
closing a collision box corresponding to the first virtual object, and placing the response of the first virtual object to external operations in a locked state;
after the position exchange is completed, the method further comprises:
opening the collision box corresponding to the first virtual object, and releasing the locked state set for the first virtual object.
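By way of illustration only, the collision box and locking behaviour of claim 11 could be wrapped around the travel step as follows; the fields and names below are hypothetical:

```python
# Illustrative sketch only: disable the collision box and lock input while the
# first virtual object travels along the exchange path, then restore both.
from dataclasses import dataclass
from contextlib import contextmanager

@dataclass
class VirtualObject:
    collision_enabled: bool = True
    input_locked: bool = False

@contextmanager
def position_exchange_travel(obj: VirtualObject):
    obj.collision_enabled = False   # close the collision box
    obj.input_locked = True         # ignore external operations while moving
    try:
        yield obj
    finally:
        obj.collision_enabled = True  # reopen the collision box
        obj.input_locked = False      # release the locked state

hero = VirtualObject()
with position_exchange_travel(hero):
    pass  # move the object along the exchange path at the preset speed
print(hero.collision_enabled, hero.input_locked)  # True False
```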
12. The method of claim 1, wherein, before responding to the triggering operation for controlling the first virtual object to release the position exchange skill, the method further comprises:
displaying a plurality of third virtual objects in the virtual scene in a preset manner;
in response to a virtual object selection operation, taking the selected third virtual object as the second virtual object.
13. A control apparatus for a virtual object, the apparatus comprising:
a display module, configured to display a first virtual object located at a first position in a virtual scene, wherein the virtual scene is displayed from any one of the following viewing angles: a first-person viewing angle, a third-person viewing angle, and a bird's-eye viewing angle, and the virtual scene can be switched among the viewing angles at will;
an acquisition module, configured to acquire feature data of the first virtual object, wherein the feature data comprises at least one of the following: an action range, a skill waiting time, and an energy value; invoke a machine learning model based on the feature data, the first position, and a second position of the virtual scene to obtain probabilities corresponding to a plurality of candidate skills, the plurality of candidate skills comprising a position exchange skill; and when the maximum probability corresponds to the position exchange skill, display sixth prompt information, wherein the sixth prompt information is used for prompting the release of the position exchange skill;
the display module being further configured to display, in response to a triggering operation for controlling the first virtual object to release a calling skill, a called second virtual object at the second position;
an exchange module, configured to move the first virtual object from the first position to the second position at a preset speed in response to a triggering operation for controlling the first virtual object to release the position exchange skill;
a hiding module, configured to hide the energy value of the first virtual object so as to block, during the movement, triggering operations for controlling the first virtual object to repeatedly release the position exchange skill, wherein the position exchange skill is predicted by invoking the machine learning model;
the exchange module being further configured to directly update the second position of the second virtual object in the virtual scene to the first position in the virtual scene, or directly update the second position of the second virtual object in the virtual scene to the first position in the virtual scene after a preset time elapses;
the display module being further configured to display, in response to a crossing operation for controlling the first virtual object, a process of the first virtual object crossing an obstacle with the aid of the second virtual object, wherein the height of the obstacle exceeds the jump height that the first virtual object can achieve without the aid of the second virtual object.
14. An electronic device, the electronic device comprising:
a memory for storing executable instructions;
a processor for implementing the method of controlling a virtual object according to any one of claims 1 to 12 when executing executable instructions stored in said memory.
15. A computer readable storage medium storing executable instructions for implementing the method of controlling a virtual object according to any one of claims 1 to 12 when executed by a processor.
CN202110441617.7A 2021-04-23 2021-04-23 Virtual object control method and device, electronic equipment and storage medium Active CN113018862B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110441617.7A CN113018862B (en) 2021-04-23 2021-04-23 Virtual object control method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113018862A CN113018862A (en) 2021-06-25
CN113018862B (en) 2023-07-21

Family

ID=76457546

Country Status (1)

Country Link
CN (1) CN113018862B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114470775A * 2022-01-27 2022-05-13 Tencent Technology (Shenzhen) Co., Ltd. Object processing method, device, equipment and storage medium in virtual scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10496156B2 (en) * 2016-05-17 2019-12-03 Google Llc Techniques to change location of objects in a virtual/augmented reality system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40046010

Country of ref document: HK

GR01 Patent grant