CN112245921B - Virtual object control method, device, equipment and storage medium

Virtual object control method, device, equipment and storage medium

Info

Publication number
CN112245921B
Authority
CN
China
Prior art keywords
virtual object
key frame
frame data
data
control
Prior art date
Legal status
Active
Application number
CN202011287725.5A
Other languages
Chinese (zh)
Other versions
CN112245921A
Inventor
黄超 (Huang Chao)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011287725.5A
Publication of CN112245921A
Application granted
Publication of CN112245921B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/56: Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding

Abstract

The present application relates to a virtual object control method, apparatus, device, and storage medium, and to the field of virtual scene technologies. The method includes: displaying a scene picture of a virtual scene, the scene picture including a target virtual object; acquiring object position information; acquiring first key frame data from among pieces of key frame data, where each piece of key frame data is generated based on a respective control operation performed by a user on a virtual object in the virtual scene; and, in response to a first position and a second position satisfying a specified condition, controlling the target virtual object based on user operation data in the first key frame data. With this scheme, key frame data can be generated from the user's control operations on a virtual object, and the target virtual object can be controlled through the user operation data on the basis of the position information of the target virtual object and the position information in the key frame data, which improves the accuracy of virtual object control.

Description

Virtual object control method, device, equipment and storage medium
Technical Field
The present application relates to the field of virtual scene technologies, and in particular, to a method, an apparatus, a device, and a storage medium for controlling a virtual object.
Background
In many applications that construct a virtual scene (such as virtual reality applications, three-dimensional map programs, first-person shooters, and multiplayer online tactical competition games), an automation module in the application can automatically control a virtual object to complete a specified operation.
In the related art, an operation video of a virtual object in a specified scene can be recorded manually, and the images and corresponding actions of the virtual object in the video are stored. The video images are then extracted and input into a deep network trained on sample data, which outputs the probability of the virtual object executing each action; by executing the action with the highest output probability, the terminal can automate the operation of the virtual object in the specified scene.
However, in the related art, the deep network must be trained on a large number of samples and is prone to overfitting, so in practical applications the accuracy of controlling a virtual object to operate automatically in a specified scene is low.
Disclosure of Invention
The embodiments of the present application provide a virtual object control method, apparatus, device, and storage medium, which can control a virtual object according to user operation data and improve the accuracy of virtual object control. The technical solutions are as follows:
in one aspect, a virtual object control method is provided, where the method is executed by a terminal, and the method includes:
displaying a scene picture of a virtual scene, wherein the scene picture comprises a target virtual object;
acquiring object position information, wherein the object position information is used for indicating the position of the target virtual object in the virtual scene;
acquiring first key frame data from among pieces of key frame data, where each piece of key frame data is generated based on a respective control operation performed by a user on a virtual object in the virtual scene; each piece of key frame data includes user operation data and operation position information of the corresponding control operation; the operation position information indicates the position of the corresponding virtual object in the virtual scene when the corresponding control operation occurred;
in response to the first position and the second position meeting a specified condition, controlling the target virtual object based on the user operation data in the first key frame data; the first position is a position indicated by the object position information, and the second position is a position indicated by the operation position information in the first key frame data.
In another aspect, a virtual object control method is provided, the method being performed by a terminal, the method including:
displaying a scene picture of a virtual scene, wherein the scene picture comprises a target virtual object;
in response to the target virtual object being located at a designated position in the virtual scene, controlling the target virtual object according to a designated user operation;
where the designated user operation is a control operation that was executed, while the user controlled a virtual object in the virtual scene, when the virtual object was located at the designated position.
In still another aspect, there is provided a virtual object control apparatus for a terminal, the apparatus including:
the scene picture display module is used for displaying a scene picture of a virtual scene, and the scene picture comprises a target virtual object;
an object position obtaining module, configured to obtain object position information, where the object position information is used to indicate a position of the target virtual object in the virtual scene;
the first key frame acquisition module is used for acquiring first key frame data from each key frame data, and each key frame data is generated respectively based on each control operation of a user on a virtual object in the virtual scene; each key frame data comprises user operation data of corresponding control operation and operation position information; the operation position information is used for indicating the position of a corresponding virtual object in the virtual scene when corresponding control operation occurs;
the virtual object control module is used for responding to that the first position and the second position meet a specified condition, and controlling the target virtual object based on the user operation data in the first key frame data; the first position is a position indicated by the object position information, and the second position is a position indicated by the operation position information in the first key frame data.
In one possible implementation, the apparatus further includes:
a position distance obtaining module, configured to obtain a position distance, where the position distance is a distance between the first position and the second position;
and the specified condition acquisition module is used for determining that the first position and the second position meet the specified condition in response to the position distance being smaller than a distance threshold.
In one possible implementation, the apparatus further includes:
a second position moving module, configured to control the target virtual object to move to the second position in response to the position distance not being less than the distance threshold.
In one possible implementation, the second position moving module includes:
a relative position acquisition unit configured to acquire a relative positional relationship between the first position and the second position in response to the position distance not being less than the distance threshold;
a displacement parameter obtaining unit for obtaining a displacement control parameter based on the relative position relationship;
and a second position moving unit for controlling the target virtual object to move to the second position based on the displacement control parameter.
In a possible implementation manner, the key frame data further includes a time parameter; the time parameter is used for indicating the time when the corresponding control operation occurs;
the virtual object control module includes:
a time interval acquisition unit for acquiring a first time interval and a second time interval; the first time interval is an interval between the acquisition time of the object position information and the time of controlling the target virtual object based on the user operation data in the key frame data last time; the second time interval is a time interval between the first control operation and the second control operation; the first control operation is a control operation corresponding to the first key frame data, and the second control operation is a previous control operation of the first control operation in each control operation;
and a virtual object control unit, configured to, in response to that the first time interval is not less than the second time interval and that the first position and the second position satisfy the specified condition, control the target virtual object based on user operation data in the first key frame data.
In a possible implementation manner, the virtual object control module further includes:
an object position maintaining module, configured to maintain the position of the target virtual object unchanged in response to the first time interval being less than the second time interval.
In one possible implementation, the apparatus further includes:
and a first operation execution module, configured to control the target virtual object to execute a first specified operation in response to movement control being performed on the target virtual object within a first time interval while the position of the target virtual object does not change within that interval.
In one possible implementation, the first specified operation includes at least one of a jump operation and an attack operation.
In one possible implementation, the first operation execution module includes:
a second key frame acquisition unit, configured to acquire second key frame data in response to movement control being performed on the target virtual object within a second time interval while the position of the target virtual object does not change within the first time interval, where the second time interval is the most recent time interval in which the target virtual object was controlled to perform the first specified operation; the second key frame data is, among the pieces of key frame data, the data whose operation position information indicates the position closest to the current position of the target virtual object;
a third position moving unit, configured to control the target virtual object to move to a third position based on the second key frame data, where the third position is a position indicated by the operation position information in the second key frame data.
In a possible implementation manner, the user operation data is used for controlling the target virtual object to perform at least one of a jump operation, a move operation, an attack operation, and a skill release operation.
In one possible implementation, the apparatus further includes:
and the performance data output module is used for outputting the performance data of the terminal in the process of operating the virtual scene.
In yet another aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the virtual object control method described above.
In yet another aspect, a computer-readable storage medium is provided, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the above virtual object control method.
In yet another aspect, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the virtual object control method.
The technical scheme provided by the application can comprise the following beneficial effects:
Each piece of key frame data is generated according to the user's control operations on a virtual object, and when the second position indicated by the operation position information of the first key frame data and the first position indicated by the object position information of the virtual object satisfy a specified condition, the target virtual object is controlled, according to the user operation data in the first key frame data, to execute the operation corresponding to that data. In this scheme, the key frame data are generated from the user's control operations on the virtual object, and the target virtual object is controlled through the user operation data on the basis of the object position information of the target virtual object and the operation position information of the key frame data. Because the position information of the target virtual object is taken into account when the target virtual object is controlled to execute the operation corresponding to the user operation data, the accuracy of virtual object control is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic diagram of a virtual object control system provided in accordance with an exemplary embodiment.
FIG. 2 is a schematic illustration of a display interface of a virtual scene provided in accordance with an exemplary embodiment.
FIG. 3 is a flowchart illustrating a virtual object control method according to an example embodiment.
FIG. 4 is a method flow diagram illustrating a virtual object control method in accordance with an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating an automated flow of a virtual object based on key frame playback according to the embodiment shown in fig. 4.
Fig. 6 is a schematic diagram illustrating a game process according to the embodiment shown in fig. 4.
Fig. 7 is a schematic diagram illustrating a recording operation according to the embodiment shown in fig. 4.
Fig. 8 shows a key frame playback diagram according to the embodiment shown in fig. 4.
Fig. 9 shows a schematic diagram of game recording according to the embodiment shown in fig. 4.
FIG. 10 illustrates a flow diagram of a virtual object control method.
Fig. 11 is a block diagram illustrating the structure of a virtual object control apparatus according to an exemplary embodiment.
FIG. 12 illustrates a block diagram of a computer device, according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present application, as recited in the appended claims.
Artificial Intelligence (AI) is a theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline that covers a wide range of fields and involves both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Virtual scene: the virtual scene displayed (or provided) when an application runs on a terminal. The virtual scene can be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a two-dimensional virtual scene as an example, but are not limited thereto. Optionally, the virtual scene may also be used for at least one virtual object to interact with a virtual building in the scene. Optionally, the virtual scene may also be used for a battle between at least two virtual objects, or for a virtual-firearm battle between at least two virtual objects. Optionally, the virtual scene may further be used for a virtual-firearm battle between at least two virtual objects within a target area range, and the target area range may change to a new target area range as one of the at least two virtual objects reaches a designated area within it.
Virtual object: a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and a virtual vehicle. Optionally, when the virtual scene is a two-dimensional virtual scene, the virtual object may be a two-dimensional planar model, its orientation determined by the display direction of the two-dimensional virtual scene, and the virtual object occupies a part of the space in the two-dimensional virtual scene. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object may be a three-dimensional model created based on skeletal animation technology, and each virtual object then has its own shape and volume in the three-dimensional virtual scene and occupies a part of its space.
A virtual scene is typically generated by an application in a computer device such as a terminal and displayed based on hardware (e.g., a screen) of the terminal. The terminal can be a mobile terminal such as a smartphone, a tablet computer, or an e-book reader; alternatively, the terminal can be a personal computer device such as a notebook computer or a desktop computer.
Virtual props: tools that a virtual object can use in the virtual environment, including virtual weapons that can injure other virtual objects, such as pistols, rifles, sniper rifles, daggers, knives, swords, and axes; supply props such as bullets; virtual accessories mounted on a designated virtual weapon, such as quick magazines, scopes, and silencers, which provide the weapon with certain added attributes; and defensive props such as shields, armor, and armored vehicles.
Side-scrolling game (also called a horizontal game): a game whose gameplay is built on a two-dimensional layout, with the camera fixed to move in a plane to express the relationship between characters and the scene, typically as a "left-right scrolling" map fixed on a horizontal plane. Side-scrolling games can be divided into several types, such as single-screen layout, multi-screen scrolling, continuous world, and depth scene. Single-screen layout means the game character can act only on one fixed two-dimensional layout. Multi-screen scrolling means that after the game character completes a specified operation on a fixed two-dimensional layout, the virtual scene is updated to another two-dimensional layout in a scroll-like manner. Continuous world means the two-dimensional layout is one larger virtual scene, of which the display device shows only the part corresponding to the position of the virtual object, and the displayed part changes as the virtual object moves. A depth scene adds depth information to any of the above three layout designs, and the virtual object can change its depth in a specified manner, thereby simulating a three-dimensional scene with depth information on a two-dimensional layout.
Reference is now made to FIG. 1, which is a schematic diagram of a virtual object control system according to an exemplary embodiment. The virtual object control system may include: a first terminal 110, a server 120, and a second terminal 130.
The first terminal 110 has installed and running on it an application 111 that supports a virtual environment, and the application 111 may be a multiplayer online battle program. When the first terminal runs the application 111, a user interface of the application 111 is displayed on the screen of the first terminal 110. The application 111 may be any one of a multiplayer online battle arena (MOBA) game, a side-scrolling action shooting game, and a simulation strategy game (SLG). In this embodiment, the application 111 is described using a side-scrolling action shooting game as an example. The first terminal 110 is a terminal used by the first user 112, who uses it to control a first virtual object located in the virtual environment; the first virtual object may be referred to as the master virtual object of the first user 112. The activities of the first virtual object include, but are not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, throwing, and releasing skills. Illustratively, the first virtual object is a first virtual character, such as a simulated character or an anime character.
The second terminal 130 has installed and running on it an application 131 that supports a virtual environment, and the application 131 may be a multiplayer online battle program. When the second terminal 130 runs the application 131, a user interface of the application 131 is displayed on its screen. The client may be any one of a MOBA game, a battle royale shooting game, an SLG game, and a side-scrolling action game; in this embodiment, the application 131 is described using a side-scrolling action game as an example. The second terminal 130 is a terminal used by the second user 132, who uses it to control a second virtual object located in the virtual environment; the second virtual object may be referred to as the master virtual object of the second user 132. Illustratively, the second virtual object is a second virtual character, such as a simulated character or an anime character.
Optionally, the first virtual object and the second virtual object are in the same virtual world. Optionally, the first virtual object and the second virtual object may belong to the same camp, the same team, the same organization, a friend relationship, or a temporary communication right. Alternatively, the first virtual object and the second virtual object may belong to different camps, different teams, different organizations, or have a hostile relationship.
Optionally, the applications installed on the first terminal 110 and the second terminal 130 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals, and the second terminal 130 may generally refer to another of the plurality of terminals; this embodiment is illustrated with only the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in fig. 1, but in different embodiments a plurality of other terminals may access the server 120. Optionally, one or more of those terminals are terminals corresponding to a developer, on which a development and editing platform for the application supporting the virtual environment is installed. The developer can edit and update the application on that terminal and transmit the updated application installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the installation package from the server 120 to update the application.
The first terminal 110, the second terminal 130, and other terminals are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a server, a server cluster composed of a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 is used to provide background services for applications that support a two-dimensional virtual environment. Optionally, the server 120 undertakes primary computational work and the terminals undertake secondary computational work; alternatively, the server 120 undertakes the secondary computing work and the terminal undertakes the primary computing work; alternatively, the server 120 and the terminal perform cooperative computing by using a distributed computing architecture.
In one illustrative example, the server 120 includes a memory 121, a processor 122, a user account database 123, a combat services module 124, and a user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load an instruction stored in the server 120, and process data in the user account database 123 and the combat service module 124; the user account database 123 is configured to store data of a user account used by the first terminal 110, the second terminal 130, and other terminals, such as a head portrait of the user account, a nickname of the user account, a fighting capacity index of the user account, and a service area where the user account is located; the fight service module 124 is used for providing a plurality of fight rooms for the users to fight, such as 1V1 fight, 3V3 fight, 5V5 fight and the like; the user-facing I/O interface 125 is used to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless network or a wired network to exchange data.
The virtual scene may be a three-dimensional virtual scene, or the virtual scene may also be a two-dimensional virtual scene. Taking the example that the virtual scene is a two-dimensional virtual scene, please refer to fig. 2, which is a schematic diagram of a display interface of a virtual scene provided according to an exemplary embodiment. As shown in fig. 2, the display interface of the virtual scene includes a scene screen 200, and the scene screen 200 includes a currently controlled virtual object 210, an environment screen 220 of the two-dimensional virtual scene, and a virtual object 240. The virtual object 240 may be a virtual object controlled by a user or a virtual object controlled by an application program corresponding to other terminals.
In fig. 2, the currently controlled virtual object 210 and the virtual object 240 are models in the two-dimensional virtual scene, and the environment picture displayed in the scene picture 200 contains the objects observed from the viewing angle of the currently controlled virtual object 210. As shown in fig. 2, the environment picture 220 of the two-dimensional virtual scene shows a sky 224, a horizon 223, a hill 221, and a factory building 222, with the currently controlled virtual object 210 displayed above the horizon 223. The sky 224 and the hill 221 can be displayed on the display device as a background map, the factory building 222 can be a building modeled and displayed according to building data, and the user can control the virtual object 210 to interact with the factory building 222.
Under the user's control, the currently controlled virtual object 210 can release skills, use virtual props, move, and execute specified actions, and the virtual objects in the virtual scene can show different two-dimensional models accordingly. For example, if the screen of the terminal supports touch operation and the scene picture 200 of the virtual scene includes virtual controls, then when the user touches a virtual control, the currently controlled virtual object 210 executes the corresponding specified action in the virtual scene and shows the currently corresponding two-dimensional model.
Please refer to fig. 3, which is a flowchart illustrating a virtual object control method according to an exemplary embodiment. The method may be performed by a computer device, which may be a terminal. As shown in fig. 3, the computer device may control the virtual object by performing the following steps.
Step 301, a scene picture of a virtual scene is displayed, wherein the scene picture includes a target virtual object.
Step 302, object position information is obtained, where the object position information is used to indicate a position of the target virtual object in the virtual scene.
Step 303, first key frame data is acquired from among pieces of key frame data, where each piece of key frame data is generated based on a respective control operation performed by a user on a virtual object in the virtual scene; each piece of key frame data includes user operation data and operation position information of the corresponding control operation; the operation position information indicates the position of the corresponding virtual object in the virtual scene when the corresponding control operation occurred.
The first key frame data is one piece of the key frame data.
In a possible implementation manner, each piece of key frame data is background data acquired through the application program interface corresponding to the virtual scene.
The application program interface may be an API (Application Programming Interface) configured to obtain, in response to a specified operation on a recording control, real-time data corresponding to the virtual object, and to take the real-time data containing user operation data as the pieces of key frame data.
In one possible implementation, the recording control may be a control superimposed on the picture of the virtual scene.
In another possible implementation manner, the recording control may be a control on another terminal. When real-time data of a virtual object in the virtual scene of one terminal needs to be recorded, the other terminal can, through the recording control, call the application program interface corresponding to that terminal's virtual scene over a wireless or wired connection to obtain the real-time data corresponding to the virtual object.
Step 304, in response to the first position and the second position meeting a specified condition, controlling the target virtual object based on the user operation data in the first key frame data; the first position is a position indicated by the object position information, and the second position is a position indicated by the operation position information in the first key frame data.
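Taken together, steps 301 to 304 amount to a compare-position-then-replay check. The following minimal Python sketch is not part of the patent and only illustrates the shape of step 304; the KeyFrame structure, the distance helper, the apply_operation callback, and the threshold value are illustrative assumptions.

    import math
    from dataclasses import dataclass

    @dataclass
    class KeyFrame:
        op_data: dict    # user operation data of the recorded control operation
        op_pos: tuple    # (x, y): position when the control operation occurred

    def distance(p, q):
        # Euclidean distance between two positions in the scene
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def replay_if_close(target_pos, key_frame, apply_operation, threshold=5.0):
        """Step 304: control the target virtual object based on the user operation
        data in the first key frame data when the first position (target_pos) and
        the second position (key_frame.op_pos) satisfy the specified condition."""
        if distance(target_pos, key_frame.op_pos) < threshold:
            apply_operation(key_frame.op_data)  # replay the recorded user operation
            return True
        return False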
In summary, in the solution shown in this embodiment of the present application, each piece of key frame data is generated according to a control operation performed by the user on a virtual object, and when the second position indicated by the operation position information of the first key frame data and the first position indicated by the object position information of the virtual object satisfy a specified condition, the target virtual object can be controlled, according to the user operation data in the first key frame data, to execute the corresponding operation. Because the position information of the target virtual object is taken into account when it is controlled to execute the operation corresponding to the user operation data, the accuracy of virtual object control is improved.
Reference is now made to FIG. 4, which is a flowchart illustrating a method of virtual object control in accordance with an exemplary embodiment. The method may be performed by a computer device, which may be a terminal. As shown in fig. 4, the computer device may control the virtual object by performing the following steps:
Step 401, a scene picture of a virtual scene is displayed.
In one possible implementation, the scene picture of the virtual scene is displayed in response to the user's operation on a specified control.
The specified control can be a playback control in the application corresponding to the virtual scene in the terminal; the scene picture of the virtual scene is displayed on the terminal in response to the user's trigger operation on the playback control.
In one possible implementation, the scene picture may include at least two target virtual objects.
The scene picture can belong to a virtual scene displayed jointly by a plurality of terminals online: the terminals are connected through wired or wireless connections, each terminal displays the virtual scene, and the virtual scene contains the target virtual object corresponding to each terminal.
Step 402, object location information is obtained.
In one possible implementation, the object location information may be used to indicate location information corresponding to a plurality of target virtual objects.
The virtual scenes corresponding to the multiple target virtual objects may be the same virtual scene, that is, the object position information may be used to indicate the positions of the target virtual objects in the virtual scene.
Step 403, first key frame data is acquired from among the pieces of key frame data.
In one possible implementation, the pieces of key frame data have a specified order, and the first key frame data is one piece read from them according to that order.
In a possible implementation manner, the pieces of key frame data are ordered in time; that is, each piece of key frame data may include a time parameter, and the specified order of the pieces is determined by their corresponding time parameters.
In a possible implementation manner, real-time data of a virtual object is acquired based on an application program interface corresponding to the virtual scene; and acquiring the key frame data from the real-time data of the virtual object, wherein the real-time data comprises user operation data and operation position information.
The real-time data is interface data obtained by continuously recording, at the specified frequency, the background data corresponding to the virtual object through the application program interface.
In one possible implementation, the specified frequency may be a frequency that is preset by a developer.
In one possible implementation manner, the specified frequency may be a frequency determined according to terminal information corresponding to the virtual scene.
In one possible implementation, the terminal information may be the hardware device information of the terminal. Before the recording control is operated, the hardware device information of the terminal (such as memory size) is acquired, and the specified frequency is determined according to that information.
For example, when the terminal has a large memory, the recording control can occupy more memory space at run time and can therefore support a higher recording frame rate. When the terminal memory is small, recording may affect the running of the game itself; if the recording frame rate is set too high, the game may stutter or even crash, so the specified frequency needs to be set to a small value.
In one possible implementation, the specified frequency may be a frequency obtained according to a virtual setting corresponding to a virtual scene.
For example, the virtual setting corresponding to the virtual scene may be its frame rate setting, i.e., whether 30 or 60 frames of pictures are displayed per second. The specified frequency can then match the per-second display frame rate, so that the real-time data recorded at the specified frequency better matches the actual display of the pictures.
In a possible implementation manner, the real-time data corresponding to the target virtual object, including the user operation data of the corresponding control operation and the data of the operation position information, is acquired as the key frame data.
Because the user operation data and operation position information of a corresponding control operation are generated in the background by the application only after the virtual object receives a valid operation instruction from the user, acquiring the data containing both as key frame data indicates that the virtual object successfully executed some operation in that frame.
In one possible implementation, the key frame data is the data of the time frame corresponding to a valid operation instruction input by the user.
A time frame is obtained by dividing time according to the specified frequency; a valid operation instruction is an instruction, among the operation instructions input by the user to the virtual object in the virtual scene, that successfully instructs the virtual object to execute the corresponding operation.
For example, suppose the user touches an "A skill control" superimposed on the virtual scene, which is used to control the virtual object to release "skill A". When "skill A" is in a CD (cooldown) state, or is unavailable for other reasons, the virtual object does not respond to the operation instruction input by the user and does not perform the operation of releasing "skill A". In this case the instruction is not a valid instruction: the virtual object performs no corresponding operation, the interface cannot acquire the skill parameters corresponding to the virtual object and "skill A", and therefore the interface acquires no user operation parameters or operation position information for the virtual object.
When "skill A" is in an available state, the virtual object responds to the operation instruction input by the user and executes the operation of releasing "skill A" in that time frame, and the skill parameters corresponding to "skill A" are generated in the background data. The interface can then acquire, in that time frame, the skill parameters corresponding to "skill A" as the user operation parameters in the key frame data, and acquire the position of the virtual object when it releases "skill A" as the operation position information.
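To illustrate the recording side described above, here is a minimal sketch that polls a hypothetical interface at the specified frequency and keeps only the time frames containing user operation data; read_realtime_frame and the dictionary field names are illustrative assumptions, not an actual game API.

    import time

    def record_key_frames(read_realtime_frame, freq_hz=30.0, duration_s=10.0):
        """Poll background data at the specified frequency; keep only the frames
        in which a valid user operation occurred (the key frames)."""
        key_frames = []
        period = 1.0 / freq_hz
        deadline = time.time() + duration_s
        while time.time() < deadline:
            frame = read_realtime_frame()  # background data for the current time frame
            # Only frames carrying user operation data (e.g., the skill parameters
            # generated when "skill A" is actually released) become key frames.
            if frame.get("op_data") is not None:
                key_frames.append({
                    "op_data": frame["op_data"],  # user operation data
                    "op_pos": frame["pos"],       # operation position information
                    "t": frame["t"],              # time parameter
                })
            time.sleep(period)
        return key_frames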
In one possible implementation manner, in response to the operation of the playback control by the user, the first key frame data is obtained from the key frame data.
In one possible implementation manner, the virtual scene and the virtual objects in it are displayed on the terminal in response to the user's operation on the playback control, and the first key frame data is acquired from the pieces of key frame data.
Before the virtual scene and the virtual objects in it are displayed on the terminal, that is, before a game round starts, the user can enter the game through an operation on the playback control on the game-entry display interface, so that the virtual scene and the virtual objects in it are displayed and the pieces of key frame data corresponding to the virtual scene are acquired.
In one possible implementation, the user operation data is used to control the target virtual object to perform at least one of a jump operation, a move operation, an attack operation, and a skill release operation.
The user operation data may be data corresponding to a user controlling the target virtual object to execute a specified operation through an effective operation instruction, and when the target virtual object executes different operations, the corresponding user operation data are also different. For example, when the virtual object performs a jump operation, the corresponding user operation data is jump operation data; or when the virtual object performs a move operation, the corresponding user operation data may be move operation data.
In a possible implementation manner, the user operation data may include at least one of jump operation data, movement operation data, attack operation data, and skill release operation data, and history operation information of the user on the virtual object is recorded by the user operation data, that is, the type and the number of operations performed by the user to control the virtual object may be recorded according to the user operation data.
Step 404, a position distance is acquired, where the position distance is the distance between the first position and the second position.
The first position is the position indicated by the object position information, i.e., the current position of the target virtual object; the second position is the position indicated by the operation position information in the first key frame data, i.e., the position where the target virtual object should be located when it is controlled to perform the operation according to the user operation information in the first key frame data.
After the first position and the second position of the virtual object are obtained, the distance between them is calculated first, to determine how far the first position of the virtual object is from the position indicated by the operation position information in the first key frame data.
Step 405, in response to the position distance being less than a distance threshold, it is determined that the first position and the second position satisfy the specified condition.
When the position distance is smaller than the distance threshold, the current position of the target virtual object meets the position requirement for controlling the target virtual object through the key frame data.
In one possible implementation, the target virtual object is controlled to move to the second location in response to the location distance not being less than the distance threshold.
When the position distance is not less than the distance threshold, the position of the target virtual object does not meet the position requirement for controlling it through the key frame data; therefore, the target virtual object needs to be controlled to move to the second position, thereby correcting its position.
In one possible implementation manner, in response to the position distance not being smaller than the distance threshold, acquiring a relative position relationship between the first position and the second position; obtaining a displacement control parameter based on the relative position relation; and controlling the target virtual object to move to the second position based on the displacement control parameter.
For example, suppose the second position corresponding to the first key frame data is point B while the first position of the virtual object in the virtual environment is point A, with A on the left side of B. The terminal obtains the distance between A and B and their relative orientation, calculates the speed value and time value needed for the virtual object to move from A to B, and controls the virtual object to move from A to B according to the calculated speed and time values, thereby completing the position correction of the target virtual object.
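A minimal sketch of this position correction, assuming a constant horizontal walk speed: the relative position of A and B yields a direction value and a time value, which serve as the displacement control parameters. WALK_SPEED and the move_horizontally callback are illustrative assumptions.

    WALK_SPEED = 3.0  # scene units per second; an assumed constant

    def correct_position(first_pos, second_pos, move_horizontally):
        """Move the target virtual object from the first position to the second
        position indicated by the key frame's operation position information."""
        dx = second_pos[0] - first_pos[0]
        direction = 1 if dx > 0 else -1         # +1: move right, -1: move left
        duration = abs(dx) / WALK_SPEED         # time value derived from the distance
        move_horizontally(direction, duration)  # the displacement control parameters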
In a possible implementation manner, the first key frame data is the first piece among the key frame data used to control the virtual object; in that case the first position may be the starting position of the virtual object, i.e., the initial position at which the virtual object is generated in the virtual scene.
When the distance between the initial position of the virtual object and the second position corresponding to the first key frame data is not greater than the distance threshold, the virtual object is controlled, according to the operation parameters corresponding to the first key frame data, to execute the corresponding operation, and after being controlled according to the first key frame data the virtual object reaches the position indicated by the operation position information in the key frame data.
When the distance between the initial position of the target virtual object and the second position of the first key frame data is greater than the distance threshold, initial operation parameters are acquired according to the relative position relationship between the initial position of the target virtual object and the second position of the first key frame data, and the virtual object is controlled to move, according to the initial operation parameters, to the second position corresponding to the first key frame data.
In a possible implementation manner of the embodiments of the present application, an operation instruction may already have been input to the virtual object before the user operates the recording control, so the position indicated by the operation position information of the first key frame data acquired through the interface may not be the starting position of the virtual object. Therefore, before the virtual object in the virtual environment is controlled according to the first key frame data, the position of the virtual object needs to be determined, and when the starting position of the virtual object differs from the second position corresponding to the first key frame data (that is, when the distance between them is greater than the distance threshold), the position of the virtual object can be corrected.
In one possible implementation manner, the relative position relationship includes a horizontal coordinate difference value and a vertical coordinate difference value between a first position of the target virtual object and a second position corresponding to the first key frame data.
In a possible implementation manner, the movement parameter, the jump parameter, and the time identifier corresponding to the virtual object are obtained according to the horizontal coordinate difference and the vertical coordinate difference. The movement parameter indicates the magnitude and direction of the virtual object's movement speed; the jump parameter indicates whether the virtual object jumps; the time identifier indicates the time frames in which the virtual object executes the jump and movement operations.
In one possible implementation manner, obstacle information between the first position of the virtual object and the second position corresponding to the first key frame data is obtained, and the movement parameter, jump parameter, and time identifier of the target virtual object are obtained according to the obstacle information, the horizontal coordinate difference, and the vertical coordinate difference.
Wherein the obstacle information may include at least one of preset obstacle information, building information, and trap information. The obstacle information is used for indicating the position and the size of the obstacle in the virtual scene.
When the virtual object has not reached the second position corresponding to the first key frame data, obstacles or traps may lie in its path. The terminal can then obtain, through the interface, the obstacle information between the first position of the target virtual object in the virtual scene and the second position corresponding to the first key frame data, and compute the movement parameter, jump parameter, and time identifier of the target virtual object from the position and size of each obstacle in the virtual scene. For example, if the virtual object is at point C, the second position corresponding to the next key frame data is at point D, and there is a step between C and D, the terminal can obtain the position and size of the step in the virtual scene through the interface, compute the operation parameters required to reach point D (moving direction, moving speed, jump height, jump time, and so on), and control the virtual object to move to point D according to these parameters.
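The obstacle handling described above can be sketched as follows; the obstacle record, the jump rule, and the parameter names are assumptions for illustration rather than the patent's concrete algorithm.

    def plan_move(first_pos, second_pos, obstacles, walk_speed=3.0):
        """Derive the movement parameter, jump parameter, and time identifiers for
        reaching the second position when obstacles (e.g., a step) lie between the
        two positions. Each obstacle is a dict with 'x' (position) and 'height'."""
        dx = second_pos[0] - first_pos[0]
        direction = 1 if dx > 0 else -1
        lo, hi = sorted((first_pos[0], second_pos[0]))
        jumps = []
        for ob in obstacles:
            if lo <= ob["x"] <= hi:
                # Time identifier: the moment, relative to the start of the move,
                # at which a jump is needed to clear this obstacle.
                jumps.append({"at": abs(ob["x"] - first_pos[0]) / walk_speed,
                              "height": ob["height"]})
        return {"direction": direction,        # movement parameter (direction)
                "speed": walk_speed,           # movement parameter (magnitude)
                "time": abs(dx) / walk_speed,  # total movement time
                "jumps": jumps}                # jump parameters with time identifiers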
Step 406, in response to the first position and the second position satisfying the specified condition, controlling the target virtual object based on the user operation data in the first key frame data.
The specified condition is that the distance between the first position and the second position is less than the distance threshold, i.e., the distance between the position of the target virtual object and the position indicated by the operation position information in the first key frame data is below the threshold. In that case the current position of the target virtual object matches the position at which the user operation data in the first key frame data was recorded, and the target virtual object can be controlled according to the user operation data in the first key frame data.
In a possible implementation manner, the key frame data further includes a time parameter; the time parameter is used for indicating the time when the corresponding control operation occurs; acquiring a first time interval and a second time interval; the first time interval is an interval between the acquisition time of the object position information and the time of controlling the target virtual object based on the user operation data in the key frame data last time; the second time interval is a time interval between the first control operation and the second control operation; the first control operation is a control operation corresponding to the first key frame data, and the second control operation is a previous control operation of the first control operation in each control operation; and in response to the first time interval not being smaller than the second time interval and the first position and the second position meeting the specified condition, controlling the target virtual object based on the user operation data in the first key frame data.
In one possible implementation, the time parameter is used to indicate a time corresponding to the key frame data.
The time may be the in-game time of the match, or an absolute time (i.e., the real time acquired by the terminal); that is, for a series of key frame data, the time parameter can indicate both the order of the key frames and the interval between two adjacent key frames. When the virtual object is in a series of continuous operation states, the corresponding key frame data are usually continuous real-time data, i.e., real-time data recorded at the specified frequency. When no input instruction for the virtual object is received, i.e., the virtual object is in an unoperated state for some period, the real-time data of the virtual object acquired through the interface contain no user operation data, and the real-time data in that period are not acquired as key frame data.
In one possible implementation, object location information of the target virtual object is obtained based on the specified frequency.
That is, the terminal detects the object position information of the target virtual object at regular intervals according to the specified frequency, and obtains the first position corresponding to the object position information together with the corresponding time parameter. The first time interval indicates the interval between the time at which the object position information was obtained this time and the time at which the target virtual object was last operated based on user operation information in key frame data; in other words, it is the time elapsed since the target virtual object was last operated. The second time interval indicates the time difference between the first key frame data (i.e., the key frame data according to which the target virtual object is to be operated next) and the key frame data corresponding to the operation immediately preceding it.
In one possible implementation, the position of the target virtual object is maintained in response to the first time interval being less than the second time interval.
When the first time interval is smaller than the second time interval, the time corresponding to the current object position information has not yet reached the time indicated by the time parameter in the first key frame data; the target virtual object is therefore not controlled at this moment, and its position is kept unchanged.
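The interval comparison above can be sketched as follows; all timestamps are assumed to share one clock (in-game or absolute), and the function and parameter names are illustrative only.

```python
def should_replay_first_keyframe(now, last_control_time,
                                 first_kf_time, prev_kf_time,
                                 positions_close):
    """Replay the first key frame only once the recorded rhythm allows.

    first_interval  - elapsed time since the object was last controlled
                      from key frame data
    second_interval - recorded gap between the first key frame's control
                      operation and the one before it
    """
    first_interval = now - last_control_time
    second_interval = first_kf_time - prev_kf_time
    # Hold position until the recorded gap has elapsed.
    return first_interval >= second_interval and positions_close
```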
In the key frame data obtained from the real-time data, there may be a large difference between the time parameter of the first key frame data and that of the previous key frame data (the key frame data whose user operation information was used to control the target virtual object immediately before). For example, the difference between the time corresponding to the previous key frame data and the time corresponding to the first key frame data may be greater than a threshold (for example, a difference of 2 seconds against a threshold of 1 second); accordingly, the 2 seconds of real-time data received through the application program interface contain no user operation data or operation position information. In that case, the user may have deliberately stopped controlling the virtual object, for example to comply with a game mechanism or to wait for a skill cooldown (CD). After the previous event in which the target virtual object was controlled based on key frame data, the target virtual object is therefore kept still, so that the user's manipulation of the virtual object during recording is reproduced as faithfully as possible.
In a possible implementation manner, when at least one interactive module exists in the virtual scene, the stationary time threshold is obtained from the shortest trigger (consumption) time among the interactive modules in the virtual scene; an interactive module is used to interact with the virtual object to trigger a specified event.
The interactive module may be a mechanism preset in the virtual scene; the target virtual object triggers the event corresponding to the mechanism by performing a specified operation on it. For example, when the mechanism is a gain module, the user may control the virtual object to move onto it; when the interactive module detects that the virtual object has remained on it for a certain time, the virtual object obtains the corresponding gain effect, such as an attribute value increase. A mechanism may also be used to trigger the passage of virtual objects to a particular scene, and so on.
Therefore, when the time interval between the previous key frame data and the first key frame data is longer than the minimum trigger time of the mechanisms in the virtual scene, the virtual object needs to remain stationary at its position for that period before being controlled according to the user operation data of the first key frame data. This avoids discrepancies caused by a mechanism that was triggered during recording not being triggered during playback, and minimizes unpredictable effects on the subsequent control of the virtual object based on the key frame data.
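A one-line sketch of this rule, assuming the per-mechanism trigger times are available as a plain list:

```python
def stationary_time_threshold(mechanism_trigger_times):
    # The threshold is the trigger (consumption) time of the fastest
    # interactive module (mechanism) in the virtual scene.
    return min(mechanism_trigger_times)
```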
In one possible implementation manner, in response to the second time interval being not greater than a time threshold, the first time interval being not less than the second time interval, and the first position and the second position satisfying the specified condition, the target virtual object is controlled based on the user operation data in the first key frame data.
In one possible implementation, in response to the second time interval being greater than the time threshold, the target virtual object is controlled based on the user operation data in the first key frame data.
When the second time interval is not greater than the time threshold, that is, when the gap between the time parameter of the previous key frame data and that of the first key frame data is small, the user's input instruction may have been lost due to the terminal stalling or network delay; in that case, the target virtual object may be controlled directly according to the user operation data of the first key frame data.
In one possible implementation manner, in response to movement control being performed on the target virtual object within the first time interval while the position of the target virtual object does not change within the first time interval, the target virtual object is controlled to perform a first specified operation.
The first time interval is used to judge whether the virtual object is stuck. When the virtual object performs a moving operation according to the movement parameters in the key frame data, yet its position does not change throughout the first time interval, the virtual object is probably stuck somewhere in the virtual scene, and the moving operation alone is unlikely to free it. When the virtual object performs a moving operation and its position remains unchanged for less than the first time interval, the brief stall was probably caused by the terminal or by network delay and has already resolved itself.
In one possible implementation, the first specified operation includes at least one of a jump operation and an attack operation.
The virtual object may become stuck because of a randomly spawned hostile virtual object, because the terminal failed to respond normally to a jump operation, or because of network delay.
In a possible implementation manner, when the virtual object performs a moving operation according to the movement parameters, its position does not change within the first time interval, and an enemy virtual object exists in the virtual scene, the target virtual object is controlled to perform an attack operation on the enemy virtual object according to the object position information of the target virtual object and that of the enemy virtual object.
In each game session there may be randomly spawned enemy virtual objects, and the key frame data recorded from the user's control of the virtual object through the interface may not eliminate all of them. When the virtual object is stuck because a randomly spawned enemy cannot be killed by the operations in the key frame data, that is, when the terminal detects an enemy virtual object in the virtual scene beyond those handled by the key frame data, the attack direction from the virtual object toward the enemy can be obtained from the position information of both objects, and an attack operation is executed along that direction.
In another possible implementation manner, when the virtual object performs a moving operation according to the movement parameters and its position does not change within the first time interval, obstacle information in the virtual scene is acquired; the obstacle information includes the position and size of each obstacle in the virtual scene. When the distance between the virtual object and the obstacle closest to it is smaller than a threshold, the virtual object is controlled to perform a jump operation according to the size of that obstacle.
When the terminal obtains the obstacle information in the virtual scene and the distance between the virtual object and the nearest obstacle is smaller than the threshold, the virtual object is very close to the obstacle and is likely stuck on it; the virtual object is then made to perform a jump operation sized according to the obstacle. For example, when the obstacle is tall, the virtual object may need a big jump (a jump with large force), or even a double jump (jumping again while airborne from the first jump), to clear it; when the obstacle is small, a big jump or a double jump might carry the virtual object beyond the position indicated by the key frame data, so the virtual object only needs to perform a small jump (a jump with small force).
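The selection of the first specified operation described above might look like the following sketch; the thresholds, the (x, height) obstacle tuple, and the action encoding are assumptions for illustration.

```python
def first_specified_operation(self_pos, enemy_pos=None, obstacle=None,
                              near_threshold=1.0, tall_threshold=1.0):
    """Pick an escape action for a stuck virtual object.

    obstacle is a hypothetical (x_position, height) pair; enemy_pos is
    the enemy virtual object's position, if one has been detected.
    """
    if enemy_pos is not None:
        # Attack along the direction from the stuck object to the enemy.
        direction = (enemy_pos[0] - self_pos[0], enemy_pos[1] - self_pos[1])
        return ("attack", direction)
    if obstacle is not None and abs(obstacle[0] - self_pos[0]) < near_threshold:
        # Size the jump to the obstacle: tall -> big jump, low -> small jump.
        return ("jump", "big" if obstacle[1] > tall_threshold else "small")
    # Default escape attempt when no cause is identified.
    return ("jump", "small")
```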
In one possible implementation manner, in response to movement control being performed on the target virtual object within a second time interval while the position of the target virtual object does not change within the second time interval, acquiring second key frame data, where the second time interval is the time interval after the target virtual object is controlled to perform the first specified operation, and the second key frame data is, among the pieces of key frame data, the one whose operation position information indicates the position closest to the current position of the target virtual object;
and controlling the target virtual object to move to a third position based on the second key frame data, wherein the third position is the position indicated by the operation position information in the second key frame data.
In a possible implementation manner, if the position of the virtual object has not changed within the second time interval after the virtual object performed the first specified operation, the virtual object is still stuck and the first specified operation failed to free it. At this time, to get the virtual object out of the stuck state, a second specified operation is performed on it, and the target virtual object is controlled to move to the third position based on the second key frame data, so that the virtual object does not remain stuck indefinitely.
In one possible implementation, the second specified operation includes:
acquiring second key frame data from the key frame data, where the second key frame data is the key frame data whose operation position information is closest to the object position information of the target virtual object; the third position is the position indicated by the operation position information of the second key frame data; and controlling the target virtual object to move to the third position according to that operation position information.
That is, according to the position at which the virtual object is stuck, the operation position information closest to that position (namely the operation position information of the second key frame data) is found among the positions indicated by the operation position information of the pieces of key frame data, and the virtual object is controlled to move to the corresponding position (the third position). The terminal can then control the virtual object to execute the specified operations according to the operation parameters of the second key frame data and of each key frame data following it, thereby resuming the automatic control of the virtual object.
In one possible implementation manner, in response to movement control being performed on the target virtual object within a third time interval while the position of the target virtual object does not change within the third time interval, third key frame data is acquired, where the third time interval is the time interval after the target virtual object was last controlled to perform the first specified operation; the third key frame data is, among the pieces of key frame data, the one whose operation position information indicates the second-closest position to the current position of the target virtual object; and the target virtual object is controlled to move to a fourth position based on the third key frame data, the fourth position being the position indicated by the operation position information in the third key frame data.
That is, when the virtual object has moved to the position indicated by the operation position information of the nearest key frame data and is still stuck during the third time interval, the key frame data whose operation position information indicates the second-closest position may be selected as the third key frame data, and the virtual object moves to the corresponding position in a further attempt to escape the stuck state.
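Selecting the closest and second-closest recorded positions can be sketched as below; each key frame is assumed to expose a hypothetical .pos attribute, and restricting the candidates to already-replayed key frames is left to the caller.

```python
import math

def fallback_keyframe(keyframes, current_pos, rank=0):
    """Return the key frame whose recorded operation position is the
    rank-th closest to the stuck object's position (rank 0 gives the
    second key frame data, rank 1 the third key frame data)."""
    ordered = sorted(keyframes, key=lambda kf: math.dist(kf.pos, current_pos))
    return ordered[rank] if rank < len(ordered) else None
```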
In a possible implementation manner, the time parameter corresponding to the second key frame data is smaller than the time parameter corresponding to the first key frame data.
That is, the in-game time or real time of the second key frame data precedes that of the first key frame data: the event of controlling the virtual object based on the user operation data of the second key frame data occurred before the event of controlling the target virtual object based on the user operation data of the first key frame data. When the virtual object is stuck, key frame data under which the target virtual object has already been controlled can be selected as the second key frame data and the third key frame data. Since a virtual object usually gets stuck while advancing, using key frame data corresponding to a position behind the virtual object as the second or third key frame data makes it likely that the virtual object can successfully return to that position, thereby resolving the stuck state.
In one possible implementation manner, in response to movement control being performed on the target virtual object within the second time interval while the position of the target virtual object does not change within the second time interval, the operation position information of the second key frame data is taken as the object position information of the target virtual object.
That is, when the target virtual object has performed the first specified operation and a moving operation within the second time interval and its position still has not changed, the target virtual object is still stuck; in that case, the position corresponding to the operation position information of the second key frame data is directly assigned as the position of the target virtual object, i.e., the target virtual object is moved directly to the position corresponding to the second key frame data.
In one possible implementation manner, the performance data of the terminal in the process of running the virtual scene is output.
When the terminal controls the target virtual object to execute the operations corresponding to the key frame data, it can simultaneously collect performance data of the terminal while running the virtual scene; the performance data indicates the load imposed on the terminal by running the virtual scene.
The scheme shown in the embodiment of the application can also be used for automated testing of the terminal: the terminal controls the target virtual object to run automatically according to the key frame data while outputting performance data for the running virtual scene, thereby realizing an automated test of the virtual scene on the terminal.
According to this scheme, the key frame data can be generated from the user's control operations on the virtual object, and when a performance test is conducted on a terminal running the virtual scene, the key frame data is used to simulate the user's control of the target virtual object.
Please refer to fig. 5, which illustrates a flowchart of virtual object automation based on key frame playback according to an embodiment of the present application. As shown in fig. 5, the virtual object automation includes the following steps:
in step 501, a game sample is recorded.
A relatively difficult level is selected from the scenario levels of a side-scrolling action game; it requires consecutive jumps (a double jump) at specified locations to reach the next destination. Please refer to fig. 6, which shows a schematic diagram of a game process according to an embodiment of the present application. As shown in fig. 6, box 601 corresponds to the starting point of a jump and box 602 to its end point; the game character needs to jump obliquely upward at the position of box 601 to reach the position of box 602, which requires high motion fineness.
Step 502, constructing key frame features.
A frame in which the player performs an action is called a key frame (i.e., key frame data). While the player's game session is recorded (from video alone it would be difficult to extract key data such as the game character's exact position in the map and moving direction), the actions executed by the player and the state of the game character are recorded through the game interface. The executed actions include the joystick direction on the x and y axes, jumping, and releasing a skill with the corresponding skill ID (identifier); the state of the game character includes its horizontal and vertical position in the map (obtained through the game interface). Please refer to fig. 7, which illustrates a schematic diagram of a recording operation according to an embodiment of the present application, as shown in fig. 7:
The moving direction is recorded from the joystick area 701; 702 is the skill-one control, 703 the skill-two control, 704 the skill-three control, 705 the jump control (corresponding to the jump operation), and 706 the fire control (corresponding to a normal attack). While the user controls the virtual object to execute game actions through these controls, the corresponding functions in the game receive the action parameters, and the game data is acquired by capturing those parameters. Whenever any skill is used, the skill ID is recorded to facilitate subsequent playback.
The constructed feature vector is:

f = [state_jump, state_shoot, state_skill, state_move, pos_x, pos_y, joystick_x, joystick_y, skillID, time]

where state_jump, state_shoot, state_skill, and state_move are the binary states of jumping, attacking, using a skill, and moving (1 means the corresponding action is performed in that frame, 0 means it is not); pos_x and pos_y are the x- and y-axis coordinates of the game character in the level; joystick_x and joystick_y are the joystick directions on the x and y axes (a value greater than 0 on the x axis means the character moves right, otherwise left; a value greater than 0 on the y axis means the character moves up, otherwise down); skillID is the identifier of the skill released by the game character (when the character releases a skill in that frame, the skill's identifier is placed in the feature vector); and time is the time of the frame. The time information is stored so that the temporal relationships between actions can be reproduced during playback; for example, if the character must stop at a specified place for a longer time during recording to trigger a mechanism, the interval between the two key frames is longer, and during playback the length of time the game character stays stationary is determined from that interval.
For example, the recording frequency may be set to 30 frames per second; if the game character is not idle and the player performed a corresponding operation, the frame is taken as a key frame and the above features are recorded.
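A sketch of the recording step under these definitions; the KeyFrameFeature fields mirror the feature vector above, while the `frame` object and its attribute names are assumptions standing in for whatever the game interface exposes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KeyFrameFeature:
    state_jump: int     # 1 if a jump happens in this frame
    state_shoot: int    # 1 if an attack happens in this frame
    state_skill: int    # 1 if a skill is in use in this frame
    state_move: int     # 1 if the character is moving in this frame
    pos_x: float        # character x coordinate in the level
    pos_y: float        # character y coordinate in the level
    joystick_x: float   # joystick x direction (>0 means rightward)
    joystick_y: float   # joystick y direction (>0 means upward)
    skill_id: int       # identifier of the released skill, 0 if none
    time: float         # frame time, used to restore pauses on playback

def record_key_frame(frame) -> Optional[KeyFrameFeature]:
    """Keep a frame only if the player acted in it; idle frames are
    not key frames. Sampling at ~30 fps is handled by the caller."""
    acted = frame.jump or frame.shoot or frame.skill_id or frame.moving
    if not acted:
        return None
    return KeyFrameFeature(int(frame.jump), int(frame.shoot),
                           int(bool(frame.skill_id)), int(frame.moving),
                           frame.pos_x, frame.pos_y,
                           frame.joystick_x, frame.joystick_y,
                           frame.skill_id, frame.time)
```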
Step 503, playback of player behavior based on the keyframes.
After a game session is recorded, the player behavior in the game is played back based on the key frame features (the player behavior is reproduced through the game AI). The playback process is as follows. First, the game AI reads the data of the first key frame, which includes the game character's position, the joystick direction, the skill in use, and so on; if the current AI position is far from the position in the key frame, the game character must first be controlled to reach the target position, and then move, jump, and release skills according to the recorded joystick, jump, and skill information. Next, the data of the second key frame is read and the time difference between the second and first key frames is calculated; if the difference is more than 1 second, the game character must stay in place for a period of time (the stay duration being that time difference). Then the deviation between the current character position and the recorded position of the second key frame is calculated; if the character is far from the target position, it first moves to the target position and then executes the corresponding operations. The remaining game behaviors recorded by the key frames are executed in the same way. Please refer to fig. 8, which illustrates a key frame playback diagram according to an embodiment of the present application. As shown in fig. 8:
S801, if the time interval between the current time and the last operation is greater than the interval between the corresponding key frames in the recording (this preserves the behavior of the game character stopping at a specific location without acting, for example while waiting for a skill cooldown or triggering a specific mechanism), the game character moves to the character position recorded in the corresponding key frame; otherwise, the game character remains stationary. This restores, as far as possible, the time intervals between different key frames in the recording.
S802, if the deviation (calculated as the Euclidean distance) between the current game character's position and the position recorded in the corresponding key frame is less than 0.5, the action of that key frame is executed. This ensures that the position at which an operation is performed during playback matches the position at which it was recorded as closely as possible.
S803, during execution, it is first judged whether the game character moved in the features recorded by the key frame; if so, the movement operation is executed according to the recorded joystick direction. It is then judged whether the game character jumped in the recorded features; if so, the jump operation is executed. Whether the game character fired or released a skill is judged in the same way, and if so, firing or the release of the skill with the corresponding skill ID is executed. After the operations are performed, the process returns to step S801.
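Steps S801-S803 can be condensed into the following playback loop; the `ai` controller and its pos/move_to/apply methods are hypothetical stand-ins for the game interface, while the 1-second pause threshold and 0.5 position tolerance come from the embodiment above.

```python
import math
import time

def play_back(key_frames, ai):
    last_time = None
    for kf in key_frames:
        # S801: restore the recorded pause between consecutive key frames.
        if last_time is not None and kf.time - last_time > 1.0:
            time.sleep(kf.time - last_time)
        # Position calibration: walk to the recorded spot if too far away.
        if math.dist(ai.pos(), (kf.pos_x, kf.pos_y)) >= 0.5:
            ai.move_to(kf.pos_x, kf.pos_y)
        # S802/S803: close enough, so execute the recorded actions in order.
        if kf.state_move:
            ai.apply("move", (kf.joystick_x, kf.joystick_y))
        if kf.state_jump:
            ai.apply("jump")
        if kf.state_shoot:
            ai.apply("shoot")
        if kf.state_skill:
            ai.apply("skill", kf.skill_id)
        last_time = kf.time
```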
And step 504, monitoring and processing abnormal conditions.
During playback, action execution may deviate slightly from the recording (different mobile phones have different sensitivities, so the same action can produce different results; in addition, randomly appearing monsters in the level can disturb the action reproduction), and the game character may become stuck. The scheme detects the stuck state from the position of the game character: if the game character is in a moving state while its position in the current frame is unchanged from its position 2 seconds earlier, the game character is considered stuck. The cause may be that the enemy units in the current scene have not been cleared or that there is an obstacle ahead. Once the game character is determined to be stuck, it performs firing and jumping operations, which handle most stuck situations. If the character is still stuck, the key frame whose recorded position is closest to the game character's current position must be found; the character moves to the position recorded in that key frame, and the subsequent operations are executed after it arrives.
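The 2-second stuck check might be implemented as below; position_history is a hypothetical list of (timestamp, (x, y)) samples, and the caller is assumed to have already verified that the character is in a moving state.

```python
def is_stuck(position_history, now, window=2.0, eps=1e-3):
    """True when the current position matches the position from
    `window` seconds earlier, within a small tolerance."""
    old = [(t, p) for t, p in position_history if t <= now - window]
    if not old:
        return False          # not enough history to decide
    _, old_pos = old[-1]      # most recent sample at least `window` old
    _, cur_pos = position_history[-1]
    return (abs(cur_pos[0] - old_pos[0]) < eps
            and abs(cur_pos[1] - old_pos[1]) < eps)
```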
And step 505, testing the performance of the client.
The game AI reproduces the player behavior through the above steps, records the client performance data of the game through an existing performance detection tool, and outputs a client performance test report; the client performance data includes key information such as the game's frame rate, stutter, and the mobile phone's temperature. In this scheme, a game AI for a specific level is realized quickly through recording and playback, while a stuck-handling strategy and position calibration are designed to deal with abnormal states in the game. Performance data is recorded while the AI plays the game automatically, and performance test reports for different mobile phones are finally output. This reduces manual effort, and the game character's actions remain largely consistent (abnormal situations such as getting stuck are rare).
The level design of side-scrolling action games is complex; for example, two consecutive jumps in a specified direction at a specified place may be needed to reach the next scene, which requires the AI to perform very fine actions. In such a scenario, both the positional deviation between the AI's jump point and the designated point and the deviation between the jump direction and the designated direction must be small; only when the AI's actions are sufficiently fine can the complex levels of a side-scrolling action game be completed. The game AI also needs to be tested on multiple mobile phones with different configurations; because different phones have different sensitivities and games contain random monsters, the AI's play may deviate from the recorded behavior and cause abnormal situations such as getting stuck. While the AI plays, the client performance data of the game is recorded, including key information such as the frame rate, stutter, and the mobile phone's temperature, and a client performance test report is output based on that data.
The scheme shown in the embodiment of the application mainly aims to reduce the training cost of side-scrolling action game AI, improve the fineness of the AI's actions, and complete complex game levels. First a game session is recorded and the player's behavior data is captured; then the game behavior is played back based on that data (the player's behavior is reproduced through the AI). During playback, stuck states in the game must be detected, and once an abnormality such as getting stuck occurs, a predefined escape strategy is executed. Please refer to fig. 9, which illustrates a schematic diagram of game recording according to an embodiment of the present application. As shown in fig. 9, a player may control a virtual object 901 through virtual controls superimposed on the virtual scene. After the game starts, the player may start recording the virtual scene corresponding to the virtual object through a specified operation (e.g., touch, click, or slide) on a recording control 902; a time control 903 indicates the running time of the game. The virtual controls manipulated by the player may include a movement control 904 and an execution control 905, where the execution control 905 further includes a jump control, a firing control, a skill control, and the like; an effect control 906 is also superimposed on the virtual scene to indicate the gain and reduction effects the player currently has.
The scheme shown in the embodiment of the application only requires one game session to be recorded manually (only the action features of the key frames in the player's session need to be recorded, and the player's game behavior can be reproduced well through the playback and exception-handling strategies), which greatly reduces labor and time costs. With this side-scrolling action game automation method based on key frame playback, a game AI for a complex scene can be realized quickly from a single recorded session, greatly lowering the barrier to using game AI. At the same time, a robust playback scheme is provided: abnormal game states are detected through a stuck-detection strategy, and once an abnormality is found, a predefined exception-handling strategy is executed.
In summary, in the solution shown in the embodiment of the present application, each piece of key frame data is generated from a control operation of the user on a virtual object, and when the second position indicated by the operation position information of the first key frame data and the first position indicated by the object position information of the virtual object satisfy the specified condition, the target virtual object can be controlled to execute the operation corresponding to the user operation data in the first key frame data. Because the key frame data is generated from the user's control operations, and the target virtual object is controlled through the user operation data based on both the object position information of the target virtual object and the operation position information of the key frame data, the position of the target virtual object is taken into account when it executes the corresponding operation, which improves the accuracy of virtual object control.
Referring to fig. 10, a flowchart of a virtual object control method is shown. As shown in fig. 10, the virtual object control method is applied to a virtual object control device 1000, which may be a terminal. When a user controls a virtual object through operation controls superimposed on a virtual scene, a screen 1001 in which the virtual object is controlled to perform specified operations may be recorded through a recording control; real-time data corresponding to the screen 1001 is acquired through the game interface corresponding to the virtual scene, and the real-time data in which the operation parameters corresponding to the virtual object are not all 0 is taken as the key frame data 1002 of the virtual object in the virtual scene.
After the recording of the screen 1001 ends, that is, after all the key frame data corresponding to the virtual object in the virtual scene has been obtained, the virtual object may be controlled, piece by piece, to perform the operations corresponding to the operation parameters in the key frame data. After the virtual object is controlled to perform the operation 1003 corresponding to any key frame, the first position 1004 reached by the virtual object (i.e., the position corresponding to its current object position information) is obtained, and the next operation to be performed should be the one corresponding to the operation parameters of the next key frame data. At this point, the first position 1004 is compared with the second position 1005 corresponding to the next key frame data (the position indicated by its operation position information). When the first position matches the second position (or the distance between them is not greater than a threshold), the virtual object is deemed to have normally reached the second position 1005 and can normally execute the operation corresponding to the operation parameters of the next key frame data. When the first position differs from the second position (or the distance between them is greater than the threshold), the virtual object is deemed not to have reached the second position 1005 and cannot normally execute that operation, so position correction 1006 is performed on the first position 1004 to move the virtual object from the first position 1004 to the second position 1005 corresponding to the next key frame data.
After the virtual object is at the second position 1005 corresponding to the next key frame data, the difference between the time parameter of the previous key frame data and that of the next key frame data is determined. When the difference is not greater than a threshold, the virtual object does not pause at the position corresponding to the next key frame data, and directly performs the operation 1008 corresponding to its operation parameters. When the difference is greater than the threshold, the virtual object must remain stationary 1007 for a duration equal to the time difference between the two pieces of key frame data, after which it is controlled to perform the specified operation 1008 according to the operation parameters of the next key frame data.
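One iteration of the fig. 10 flow, sketched under the same assumptions as the playback example (`ai` is a hypothetical controller; the wait/execute helpers and both threshold values are illustrative):

```python
import math

def flow_step(ai, prev_kf, next_kf, dist_threshold=0.5, time_threshold=1.0):
    # Position correction 1006: move to the next key frame's position when
    # the object did not arrive there on its own.
    if math.dist(ai.pos(), (next_kf.pos_x, next_kf.pos_y)) > dist_threshold:
        ai.move_to(next_kf.pos_x, next_kf.pos_y)
    # Stationary 1007: reproduce the recorded pause when the gap between
    # the two key frames exceeds the threshold.
    gap = next_kf.time - prev_kf.time
    if gap > time_threshold:
        ai.wait(gap)
    # 1008: execute the operation recorded in the next key frame data.
    ai.execute(next_kf)
```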
Please refer to fig. 11, which is a block diagram illustrating the structure of a virtual object control apparatus according to an exemplary embodiment. The virtual object control apparatus can implement all or part of the steps in the methods provided by the embodiments shown in fig. 3 or fig. 4, and includes:
a scene picture display module 1101, configured to display a scene picture of a virtual scene, where the scene picture includes a target virtual object;
an object position obtaining module 1102, configured to obtain object position information, where the object position information is used to indicate a position of the target virtual object in the virtual scene;
a first key frame obtaining module 1103, configured to obtain first key frame data from each piece of key frame data, where each piece of key frame data is generated based on each control operation of a user on a virtual object in the virtual scene; each key frame data comprises user operation data of corresponding control operation and operation position information; the operation position information is used for indicating the position of a corresponding virtual object in the virtual scene when corresponding control operation occurs;
a virtual object control module 1104, configured to control the target virtual object based on the user operation data in the first key frame data in response to the first location and the second location satisfying a specified condition; the first position is a position indicated by the object position information, and the second position is a position indicated by the operation position information in the first key frame data.
In one possible implementation, the apparatus further includes:
a position distance obtaining module, configured to obtain a position distance, where the position distance is a distance between the first position and the second position;
and the specified condition acquisition module is used for determining that the first position and the second position meet the specified condition in response to the position distance being smaller than a distance threshold.
In one possible implementation, the apparatus further includes:
a second position moving module, configured to control the target virtual object to move to the second position in response to the position distance not being less than the distance threshold.
In one possible implementation, the second position moving module includes:
a relative position acquisition unit configured to acquire a relative positional relationship between the first position and the second position in response to the position distance not being less than the distance threshold;
a displacement parameter obtaining unit for obtaining a displacement control parameter based on the relative position relationship;
and a second position moving unit for controlling the target virtual object to move to the second position based on the displacement control parameter.
In a possible implementation manner, the key frame data further includes a time parameter; the time parameter is used for indicating the time when the corresponding control operation occurs;
the virtual object control module 1104 includes:
a time interval acquisition unit for acquiring a first time interval and a second time interval; the first time interval is an interval between the acquisition time of the object position information and the time of controlling the target virtual object based on the user operation data in the key frame data last time; the second time interval is a time interval between the first control operation and the second control operation; the first control operation is a control operation corresponding to the first key frame data, and the second control operation is a previous control operation of the first control operation in each control operation;
and a virtual object control unit, configured to, in response to that the first time interval is not less than the second time interval and that the first position and the second position satisfy the specified condition, control the target virtual object based on user operation data in the first key frame data.
In a possible implementation manner, the virtual object control module 1104 further includes:
an object position maintaining module, configured to maintain the position of the target virtual object unchanged in response to the first time interval being less than the second time interval.
In one possible implementation, the apparatus further includes:
and the first operation execution module is used for responding to the movement control of the target virtual object in a first time interval and the position of the target virtual object is not changed in the first time interval, and controlling the target virtual object to execute a first specified operation.
In one possible implementation, the first specified operation includes at least one of a jump operation and an attack operation.
In one possible implementation, the first operation execution module includes:
a second key frame acquisition unit, configured to acquire second key frame data in response to movement control being performed on the target virtual object within a second time interval while the position of the target virtual object does not change within the second time interval, where the second time interval is the time interval after the target virtual object is controlled to perform the first specified operation; the second key frame data is, among the pieces of key frame data, the one whose operation position information indicates the position closest to the current position of the target virtual object;
a third position moving unit, configured to control the target virtual object to move to a third position based on the second key frame data, where the third position is a position indicated by the operation position information in the second key frame data.
In a possible implementation manner, the user operation data is used for controlling the target virtual object to perform at least one of a jump operation, a move operation, an attack operation, and a skill release operation.
In one possible implementation, the apparatus further includes:
and the performance data output module is used for outputting the performance data of the terminal in the process of operating the virtual scene.
In summary, in the solution shown in the embodiment of the present application, each piece of key frame data is generated from a control operation of the user on a virtual object, and when the second position indicated by the operation position information of the first key frame data and the first position indicated by the object position information of the virtual object satisfy the specified condition, the target virtual object can be controlled to execute the operation corresponding to the user operation data in the first key frame data. Because the key frame data is generated from the user's control operations, and the target virtual object is controlled through the user operation data based on both the object position information of the target virtual object and the operation position information of the key frame data, the position of the target virtual object is taken into account when it executes the corresponding operation, which improves the accuracy of virtual object control.
Fig. 12 is a block diagram illustrating the structure of a computer device 1200 according to an example embodiment. The computer device 1200 may be a terminal, such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III, motion video Experts compression standard Audio Layer 3), an MP4 player (Moving Picture Experts Group Audio Layer IV, motion video Experts compression standard Audio Layer 4), a laptop computer, or a desktop computer. Computer device 1200 may also be referred to by other names such as user equipment, portable terminals, laptop terminals, desktop terminals, and the like.
Generally, computer device 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1202 may include one or more computer-readable storage media, which may be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1202 is used to store at least one instruction for execution by processor 1201 to implement the virtual object control method provided by the method embodiments herein.
In some embodiments, the computer device 1200 may further optionally include: a peripheral interface 1203 and at least one peripheral. The processor 1201, memory 1202, and peripheral interface 1203 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, display 1205, camera assembly 1206, audio circuitry 1207, and power supply 1209.
The peripheral interface 1203 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, memory 1202, and peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices by electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1204 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1204 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1205 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to acquire touch signals on or over the surface of the display screen 1205. The touch signal may be input to the processor 1201 as a control signal for processing. At this point, the display 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1205 may be one, providing the front panel of the computer device 1200; in other embodiments, the display 1205 may be at least two, respectively disposed on different surfaces of the computer device 1200 or in a folded design; in still other embodiments, the display 1205 may be a flexible display disposed on a curved surface or on a folded surface of the computer device 1200. Even further, the display screen 1205 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display panel 1205 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
Camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1206 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1201 for processing or inputting the electric signals into the radio frequency circuit 1204 to achieve voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and located at different locations on the computer device 1200. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The power supply 1209 is used to power the various components in the computer device 1200. The power source 1209 may be alternating current, direct current, disposable or rechargeable. When the power source 1209 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the computer device 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, optical sensor 1215, and proximity sensor 1216.
The acceleration sensor 1211 may detect magnitudes of accelerations on three coordinate axes of a coordinate system established with the computer apparatus 1200. For example, the acceleration sensor 1211 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1201 may control the display screen 1205 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1212 may detect a body direction and a rotation angle of the computer device 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the computer device 1200 in cooperation with the acceleration sensor 1211. The processor 1201 can implement the following functions according to the data collected by the gyro sensor 1212: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1213 may be disposed on the side bezel of computer device 1200 and/or underlying display 1205. When the pressure sensor 1213 is disposed on the side frame of the computer device 1200, the holding signal of the user to the computer device 1200 can be detected, and the processor 1201 performs left-right hand recognition or quick operation according to the holding signal acquired by the pressure sensor 1213. When the pressure sensor 1213 is disposed at a lower layer of the display screen 1205, the processor 1201 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1205. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the display screen 1205 according to the ambient light intensity collected by the optical sensor 1215. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1205 is increased; when the ambient light intensity is low, the display brightness of the display screen 1205 is turned down. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 according to the ambient light intensity collected by the optical sensor 1215.
A proximity sensor 1216, also called a distance sensor, is generally provided on the front panel of the computer device 1200. The proximity sensor 1216 is used to collect the distance between the user and the front of the computer device 1200. In one embodiment, when the proximity sensor 1216 detects that the distance between the user and the front of the computer device 1200 gradually decreases, the processor 1201 controls the display screen 1205 to switch from the bright screen state to the dark screen state; when the proximity sensor 1216 detects that the distance gradually increases, the processor 1201 controls the display screen 1205 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in FIG. 12 does not constitute a limitation on the computer device 1200, and that the computer device 1200 may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as a memory including a computer program (instructions), which can be executed by a processor of a computer device to perform the methods shown in the various embodiments of the present application. For example, the non-transitory computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the methods shown in the various embodiments described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. A virtual object control method, characterized in that the method is executed by a terminal, the method comprising:
displaying a scene picture of a virtual scene, wherein the scene picture comprises a target virtual object;
acquiring object position information, wherein the object position information is used for indicating the position of the target virtual object in the virtual scene;
acquiring first key frame data from each key frame data, wherein each key frame data is generated respectively based on each control operation of a user on a virtual object in the virtual scene; each key frame data comprises user operation data of the corresponding control operation and operation position information; the operation position information is used for indicating the position of the corresponding virtual object in the virtual scene when the corresponding control operation occurs;
in response to the first position and the second position meeting a specified condition, controlling the target virtual object based on user operation data in the first key frame data; the first position is a position indicated by the object position information, and the second position is a position indicated by the operation position information in the first key frame data.
2. The method according to claim 1, wherein, before the controlling the target virtual object based on the user operation data in the first key frame data in response to the first position and the second position satisfying a specified condition, the method further comprises:
obtaining a position distance, the position distance being the distance between the first position and the second position;
determining that the first position and the second position satisfy the specified condition in response to the position distance being less than a distance threshold.
3. The method of claim 2, further comprising:
in response to the position distance not being less than the distance threshold, controlling the target virtual object to move to the second position.
4. The method of claim 3, wherein the controlling the target virtual object to move to the second position in response to the position distance not being less than the distance threshold comprises:
acquiring a relative position relationship between the first position and the second position in response to the position distance not being less than the distance threshold;
acquiring a displacement control parameter based on the relative position relationship;
controlling the target virtual object to move to the second position based on the displacement control parameter.
5. The method of claim 1, wherein the key frame data further comprises a time parameter; the time parameter is used for indicating the time when the corresponding control operation occurs;
the controlling the target virtual object based on the user operation data in the first key frame data in response to the first position and the second position satisfying a specified condition includes:
acquiring a first time interval and a second time interval; the first time interval is the interval between the acquisition time of the object position information and the time at which the target virtual object was last controlled based on user operation data in key frame data; the second time interval is the time interval between a first control operation and a second control operation; the first control operation is the control operation corresponding to the first key frame data, and the second control operation is the control operation immediately preceding the first control operation among the respective control operations;
in response to the first time interval not being less than the second time interval and the first location and the second location satisfying the specified condition, controlling the target virtual object based on user operation data in the first key frame data.
6. The method of claim 5, further comprising:
in response to the first time interval being less than the second time interval, maintaining the position of the target virtual object unchanged.
7. The method of claim 1, further comprising:
in response to movement control being performed on the target virtual object within a first time interval and the position of the target virtual object remaining unchanged within the first time interval, controlling the target virtual object to perform a first specified operation.
8. The method of claim 7, wherein the first specified operation comprises at least one of a jump operation and an attack operation.
9. The method of claim 7, further comprising:
in response to movement control being performed on the target virtual object within a second time interval and the position of the target virtual object remaining unchanged within the first time interval, acquiring second key frame data, the second time interval being the time interval since the target virtual object was last controlled to perform the first specified operation; the second key frame data is the key frame data, among the respective key frame data, whose operation position information indicates the position closest to the current position of the target virtual object;
controlling the target virtual object to move to a third position based on the second key frame data, the third position being a position indicated by the operation position information in the second key frame data.
10. The method of claim 1, wherein the user operation data is used to control the target virtual object to perform at least one of a jump operation, a movement operation, an attack operation, and a skill release operation.
11. The method according to any one of claims 1 to 10, further comprising:
outputting performance data of the terminal during the running of the virtual scene.
12. A virtual object control method, characterized in that the method is executed by a terminal, the method comprising:
displaying a scene picture of a virtual scene, wherein the scene picture comprises a target virtual object;
in response to the target virtual object being located at a first position in the virtual scene, and the first position and a second position satisfying a specified condition, controlling the target virtual object based on user operation data in first key frame data; the first key frame data is generated based on a control operation performed by a user on a virtual object in the virtual scene; the second position is the position of the virtual object in the virtual scene when the control operation in the first key frame data occurred.
13. A virtual object control apparatus, wherein the apparatus is used for a terminal, the apparatus comprising:
the scene picture display module is used for displaying a scene picture of a virtual scene, and the scene picture comprises a target virtual object;
an object position obtaining module, configured to obtain object position information, where the object position information is used to indicate a position of the target virtual object in the virtual scene;
the first key frame acquisition module is used for acquiring first key frame data from each key frame data, and each key frame data is generated respectively based on each control operation of a user on a virtual object in the virtual scene; each key frame data comprises user operation data of corresponding control operation and operation position information; the operation position information is used for indicating the position of a corresponding virtual object in the virtual scene when corresponding control operation occurs;
the virtual object control module is used for responding that the first position and the second position meet a specified condition, and controlling the target virtual object based on the user operation data in the first key frame data; the first position is a position indicated by the object position information, and the second position is a position indicated by the operation position information in the first key frame data.
14. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the virtual object control method according to any one of claims 1 to 12.
15. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a virtual object control method according to any one of claims 1 to 12.
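To make the claimed control flow concrete, the following sketch walks through the loop of claims 1 to 6: checking the distance condition between the object position and the recorded operation position (claim 2), gating replay on the interval between recorded operations (claims 5 and 6), replaying the recorded user operation data when both hold (claim 1), and otherwise moving toward the recorded position (claims 3 and 4). All names, the data layout, and the two-dimensional position model are assumptions for illustration; this is not the patented implementation.

    import math
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    Pos = Tuple[float, float]

    @dataclass
    class KeyFrame:
        op_pos: Pos        # operation position information
        op_data: str       # user operation data (e.g. "jump", "attack")
        timestamp: float   # time parameter: when the operation occurred

    def dist(a: Pos, b: Pos) -> float:
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def control_step(obj_pos: Pos, now: float, last_replay: float,
                     frames: List[KeyFrame], idx: int,
                     threshold: float = 1.0) -> Optional[str]:
        """Return the action for the current key frame, or None to hold."""
        frame = frames[idx]  # the first key frame data
        if idx > 0:
            # Claims 5-6: wait at least as long as the recorded gap
            # between this operation and the one before it.
            recorded_gap = frame.timestamp - frames[idx - 1].timestamp
            if now - last_replay < recorded_gap:
                return None  # keep the position unchanged
        if dist(obj_pos, frame.op_pos) < threshold:
            return frame.op_data                   # claim 1: replay
        return "move_toward:%r" % (frame.op_pos,)  # claims 3-4: approach

    frames = [KeyFrame((0.0, 0.0), "move", 0.0),
              KeyFrame((4.0, 3.0), "jump", 2.5)]
    print(control_step((4.2, 3.1), 3.0, 0.0, frames, 1))  # jump
    print(control_step((0.0, 0.0), 3.0, 0.0, frames, 1))  # move_toward...

A fuller implementation along these lines would also include the stuck recovery of claims 7 to 9: if movement control runs for an interval without the position changing, trigger a specified operation such as a jump, and if the object still does not move, navigate to the nearest recorded operation position.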
CN202011287725.5A 2020-11-17 2020-11-17 Virtual object control method, device, equipment and storage medium Active CN112245921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011287725.5A CN112245921B (en) 2020-11-17 2020-11-17 Virtual object control method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112245921A CN112245921A (en) 2021-01-22
CN112245921B (en) 2022-04-15

Family

ID: 74266037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011287725.5A Active CN112245921B (en) 2020-11-17 2020-11-17 Virtual object control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112245921B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113018853B (en) * 2021-04-21 2022-11-18 腾讯科技(深圳)有限公司 Data processing method, data processing device, computer equipment and storage medium
CN112990236B (en) * 2021-05-10 2021-08-31 腾讯科技(深圳)有限公司 Data processing method and related device
CN114519779B (en) * 2022-04-20 2022-06-28 腾讯科技(深圳)有限公司 Motion generation model training method, device, equipment and storage medium
CN115185639B (en) * 2022-07-12 2023-06-23 安超云软件有限公司 Method and system for realizing virtualized API (application program interface)
CN116999840B (en) * 2023-08-29 2024-04-02 深圳灿和兄弟网络科技有限公司 Virtual object moving method and device based on data analysis

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799428A (en) * 2012-06-28 2012-11-28 北京大学 Operation recording and playback method for interactive software
CN110325965B (en) * 2018-01-25 2021-01-01 腾讯科技(深圳)有限公司 Object processing method, device and storage medium in virtual scene
CN109771948A (en) * 2019-01-10 2019-05-21 珠海金山网络游戏科技有限公司 A kind of integrated online game operational order system and method
US11253783B2 (en) * 2019-01-24 2022-02-22 Kabushiki Kaisha Ubitus Method for training AI bot in computer game
CN110898427B (en) * 2019-11-26 2023-11-03 上海米哈游网络科技股份有限公司 Game playback method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112245921A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
CN112245921B (en) Virtual object control method, device, equipment and storage medium
CN109445662B (en) Operation control method and device for virtual object, electronic equipment and storage medium
CN111589133B (en) Virtual object control method, device, equipment and storage medium
CN110917619B (en) Interactive property control method, device, terminal and storage medium
CN111921197B (en) Method, device, terminal and storage medium for displaying game playback picture
CN112494955B (en) Skill releasing method, device, terminal and storage medium for virtual object
CN111589130B (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN110721469B (en) Method, terminal and medium for shielding virtual object in virtual environment
CN111744184B (en) Control showing method in virtual scene, computer equipment and storage medium
CN111659117B (en) Virtual object display method and device, computer equipment and storage medium
CN110585710A (en) Interactive property control method, device, terminal and storage medium
CN112569596B (en) Video picture display method and device, computer equipment and storage medium
CN112138383B (en) Virtual item display method, device, equipment and storage medium
CN110860087B (en) Virtual object control method, device and storage medium
CN112569600B (en) Path information sending method in virtual scene, computer device and storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN111744185A (en) Virtual object control method and device, computer equipment and storage medium
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN112451969A (en) Virtual object control method and device, computer equipment and storage medium
CN113713382A (en) Virtual prop control method and device, computer equipment and storage medium
CN114404972A (en) Method, device and equipment for displaying visual field picture
CN113713383A (en) Throwing prop control method and device, computer equipment and storage medium
CN111589102B (en) Auxiliary tool detection method, device, equipment and storage medium
CN111265867B (en) Method and device for displaying game picture, terminal and storage medium
CN111659122A (en) Virtual resource display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40037806
Country of ref document: HK

GR01 Patent grant