CN114037783A - Animation playing method, device, equipment, readable storage medium and program product - Google Patents


Info

Publication number
CN114037783A
CN114037783A (application number CN202111628772.6A)
Authority
CN
China
Prior art keywords
animation
virtual
virtual object
target
displacement vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111628772.6A
Other languages
Chinese (zh)
Inventor
晏嘉庆
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Publication of CN114037783A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation
    • G06T 15/205: Image-based rendering
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/53: Involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/80: Special adaptations for executing a specific game genre or game mode
    • A63F 13/837: Shooting of targets
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen
    • A63F 2300/30: Output arrangements for receiving control signals generated by the game device
    • A63F 2300/303: Displaying additional data, e.g. simulating a Head Up Display
    • A63F 2300/307: Displaying an additional window with a view from the top of the game field, e.g. radar screen
    • A63F 2300/60: Methods for processing data by generating or executing the game program
    • A63F 2300/66: Rendering three dimensional images
    • A63F 2300/80: Specially adapted for executing a specific type of game
    • A63F 2300/8076: Shooting
    • A63F 2300/8082: Virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an animation playing method, apparatus, device, readable storage medium, and program product in the field of interface interaction. The method includes: receiving a control operation on a virtual object and controlling the virtual object to adjust from a first form, in which it holds a virtual prop, to a second form; obtaining a first displacement vector between the first form and the second form based on the pose of the virtual object; obtaining a second displacement vector between the first form and the second form from a reference animation resource; and obtaining animation data to play based on the scaling ratio between the first displacement vector and the second displacement vector. Because the reference animation resource corresponds to a whole group of virtual props, whichever prop in the group the virtual object holds when executing the target action, the same reference resource can be scaled to render and play the animation. This improves the efficiency of configuring the animation for the target action and reduces the device's resource occupation and computation load.

Description

Animation playing method, device, equipment, readable storage medium and program product
The present application claims priority to Chinese patent application No. 202111204137.5, entitled "Animation playing method, apparatus, device, readable storage medium, and program product," filed on October 15, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of interface interaction, and in particular, to a method, an apparatus, a device, a readable storage medium, and a program product for playing an animation.
Background
In gaming applications, or in some applications based on a virtual environment, players can typically control a virtual object to perform a variety of actions in the virtual environment, such as an open-mirror (scope-in) action, a close-mirror action, or a running action, where the open-mirror action switches a virtual object holding a virtual firearm from a hip-fire (waist-shooting) state to an aiming state.
In the related art, because parameters such as barrel length and grip position differ between virtual firearms, developers need to configure separate open-mirror or close-mirror action data for each virtual firearm.
However, because there are many kinds of virtual firearms, and different firearm accessories can further change a firearm's parameters, developers must configure open-mirror and close-mirror action data for every virtual firearm and every corresponding firearm configuration, so the configuration of these animations is inefficient.
Disclosure of Invention
The embodiments of the present application provide an animation playing method, apparatus, device, readable storage medium, and program product, which can improve the efficiency of the animation configuration process. The technical solution is as follows:
in one aspect, an animation playing method is provided, and the method includes:
receiving a control operation on a virtual object, wherein the control operation is used for controlling the virtual object to execute a target action, and the target action is used for controlling the virtual object to be adjusted from a first form holding a virtual prop to a second form;
acquiring a first displacement vector corresponding to the first form and the second form based on the pose of the virtual object;
acquiring a second displacement vector corresponding to the first form and the second form in a reference animation resource, wherein the reference animation resource is an adaptive animation resource corresponding to the prop type of the virtual prop and the target action, the reference animation resource further comprises intermediate animation data of the target action, and the intermediate animation data is used for indicating the action process of the target action;
adjusting the intermediate animation data based on the scaling corresponding to the first displacement vector and the second displacement vector to obtain animation data of the virtual object executing the target action;
playing the animation of the virtual object executing the target action based on the animation data.
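The method steps above can be sketched in code. The following is a minimal illustrative sketch, not the patent's implementation: all function and variable names are invented, and per-axis scaling is only one plausible reading of "the scaling corresponding to the first displacement vector and the second displacement vector."

```python
def play_target_action(first_pos, second_pos, ref_start, ref_end, ref_frames):
    """Scale one reference animation so it fits the current virtual prop.

    first_pos/second_pos: hand position of the virtual object in the first
    and second forms; ref_start/ref_end: the corresponding positions in the
    reference animation resource; ref_frames: its intermediate frames.
    """
    # First displacement vector, obtained from the virtual object's pose.
    v1 = [b - a for a, b in zip(first_pos, second_pos)]
    # Second displacement vector, obtained from the reference resource.
    v2 = [b - a for a, b in zip(ref_start, ref_end)]
    # Scaling ratio between the two vectors (per axis; falls back to 1
    # where the reference vector has no displacement on an axis).
    scale = [a / b if b else 1.0 for a, b in zip(v1, v2)]
    # Adjust the intermediate animation data and return it for playback.
    adjusted = []
    for frame in ref_frames:
        offset = [c - s for c, s in zip(frame, ref_start)]
        adjusted.append(tuple(p + o * k
                              for p, o, k in zip(first_pos, offset, scale)))
    return adjusted
```

For example, if the reference clip moves the hand 1 unit along an axis but the currently held prop requires 2 units, every intermediate frame's offset along that axis is doubled.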
In another aspect, there is provided an animation playback apparatus, including:
the receiving module is used for receiving a control operation on a virtual object, wherein the control operation is used for controlling the virtual object to execute a target action, and the target action is used for controlling the virtual object to be adjusted from a first form holding a virtual prop to a second form;
an obtaining module configured to obtain a first displacement vector corresponding to the first form and the second form based on a pose of the virtual object;
the obtaining module is further configured to obtain a second displacement vector corresponding to the first form and the second form in a reference animation resource, where the reference animation resource is an adaptive animation resource corresponding to the prop type of the virtual prop and the target action, the reference animation resource further includes intermediate animation data of the target action, and the intermediate animation data is used to indicate an action process of the target action;
the playing module is used for adjusting the intermediate animation data based on the scaling corresponding to the first displacement vector and the second displacement vector to obtain animation data of the virtual object executing the target action; playing the animation of the virtual object executing the target action based on the animation data.
In another aspect, an animation playing method is provided, and the method includes:
receiving a control operation on a virtual object, wherein the control operation is used for controlling the virtual object to execute a target action, and the target action is used for controlling the virtual object to be adjusted from a first form holding a virtual prop to a second form;
displaying an animation of the virtual object executing the target action based on the control operation;
During execution of the target action, a target body part of the virtual object changes along a target change path. The target change path is obtained by scaling adjustment on the basis of a reference change path, and the scaling ratio corresponds to the virtual prop held by the virtual object.
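The path scaling in this aspect can be sketched as scaling each point of the reference change path about the path's starting point; the function name and the uniform scalar ratio are assumptions made for illustration, not details from the patent.

```python
def target_change_path(ref_path, scale):
    """Scale a reference change path about its starting point.

    ref_path: positions of the target body part in the reference
    animation; scale: ratio determined by the held virtual prop.
    """
    start = ref_path[0]
    return [tuple(s + (c - s) * scale for c, s in zip(point, start))
            for point in ref_path]
```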
In another aspect, there is provided an animation playback apparatus, including:
the receiving module is used for receiving a control operation on a virtual object, wherein the control operation is used for controlling the virtual object to execute a target action, and the target action is used for controlling the virtual object to be adjusted from a first form holding a virtual prop to a second form;
the display module is used for displaying the animation of the virtual object executing the target action based on the control operation;
in the execution process of the target action, the target body part of the virtual object changes along a target change path, the target change path is obtained by scaling adjustment on the basis of a reference change path, and the scaling corresponds to the virtual prop held by the virtual object.
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the animation playing method according to any one of the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the animation playing method as described in any one of the embodiments of the present application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to make the computer device execute the animation playing method in any one of the above embodiments.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
Because the reference animation resource is an adaptive animation resource corresponding to both the prop type of the virtual prop and the target action, that is, a single animation resource shared by a group of virtual props, the virtual object may hold any prop in the group when executing the target action, and the same reference resource can be scaled to render and play the animation. This improves the efficiency of configuring the animation for the target action and reduces the device's resource occupation and computation load.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a related art open mirror animation configuration provided by an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of an open-mirror animation configuration method provided by an exemplary embodiment of the present application;
fig. 3 is a block diagram of a terminal according to an exemplary embodiment of the present application;
FIG. 4 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 5 is a flow chart of an animation playback method provided by an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a first displacement vector determination method provided based on the embodiment shown in FIG. 5;
FIG. 7 is a flowchart of an animation playback method provided by another exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of the open mirror process provided based on the embodiment shown in FIG. 7;
FIG. 9 is a schematic diagram of a reference open-mirror animation resource provided based on the embodiment shown in FIG. 7;
FIG. 10 is a flow chart of an animation playback method provided by another exemplary embodiment of the present application;
FIG. 11 is a schematic diagram illustrating an overall process of open mirror animation provided by an exemplary embodiment of the present application;
FIG. 12 is a flowchart of an animation playback method provided by another exemplary embodiment of the present application;
fig. 13 is a block diagram of a structure of an animation playback device according to an exemplary embodiment of the present application;
fig. 14 is a block diagram of an animation playback device according to another exemplary embodiment of the present application;
fig. 15 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In gaming applications, or in some applications based on a virtual environment, players can typically control a virtual object to perform a variety of actions in the virtual environment, such as an open-mirror action, in which a virtual object holding a virtual firearm switches from a hip-fire (waist-shooting) state to a state of aiming through the sighting telescope, and a close-mirror action. In the embodiments of the present application, the virtual firearms, virtual bullets, sighting telescopes, and the like are virtual props in the game application.
Taking the firearm open-mirror action as an example, FIG. 1 is a schematic diagram of the open-mirror animation configuration in the related art, provided in an exemplary embodiment of the present application. As shown in FIG. 1, the related art configures the open-mirror action for each virtual firearm one by one: virtual firearm 110 is configured with open-mirror animation 111, virtual firearm 120 is configured with open-mirror animation 121, and so on. However, because there are many kinds of virtual firearms, and different firearm accessories can further change a firearm's parameters, developers must configure open-mirror and close-mirror action data for every virtual firearm and every corresponding firearm configuration, so the configuration of these animations is inefficient.
FIG. 2 is a schematic diagram of an open-mirror animation configuration method according to an exemplary embodiment of the application. As shown in fig. 2, first position point coordinates 211 of the current virtual object hand are obtained, and second position point coordinates 212 of the virtual object hand after the mirror is opened are determined according to the virtual firearm held by the virtual object, so as to obtain a first displacement vector 210 between the first position point coordinates and the second position point coordinates.
A starting position point coordinate 221 and an ending position point coordinate 222 are obtained from the reference open-mirror animation resource, and a second displacement vector 220 is determined from the starting position point coordinate 221 and the ending position point coordinate 222.
The scaling ratio of the open-mirror animation resource is then determined from the first displacement vector 210 and the second displacement vector 220, and the hand position in the open-mirror animation resource is scaled accordingly, yielding the open-mirror animation corresponding to the virtual firearm currently held by the virtual object.
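The scale determination above can be read as the ratio of the two displacement vectors' magnitudes. A minimal sketch under that assumption follows; the function name and example coordinates are invented, not taken from the patent.

```python
import math

def scaling_ratio(v1, v2):
    """Ratio of the first displacement vector (hand travel required by the
    currently held firearm) to the second displacement vector (hand travel
    in the reference open-mirror animation resource)."""
    n1 = math.sqrt(sum(c * c for c in v1))  # magnitude of vector 210
    n2 = math.sqrt(sum(c * c for c in v2))  # magnitude of vector 220
    return n1 / n2
```

For instance, a first displacement vector of (3, 4, 0) against a reference vector of (1.5, 2, 0) gives a ratio of 2.0, meaning hand positions in the reference clip would be moved twice as far.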
The terminal in the present application may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and so on. The terminal installs and runs an application program supporting a virtual environment, such as an application program supporting a three-dimensional virtual environment. The application program may be any one of a virtual reality application program, a three-dimensional map program, a Third-Person Shooting game (TPS), a First-Person Shooting game (FPS), and a Multiplayer Online Battle Arena game (MOBA). The application program may be a stand-alone application program, such as a stand-alone three-dimensional game program, or a network online application program.
Fig. 3 shows a block diagram of an electronic device according to an exemplary embodiment of the present application. The electronic device 300 includes: an operating system 320 and application programs 322.
Operating system 320 is the base software that provides applications 322 with secure access to computer hardware.
Application 322 is an application that supports a virtual environment. Optionally, application 322 is an application that supports a three-dimensional virtual environment. The application 322 may be any one of a virtual reality application, a three-dimensional map program, a TPS game, an FPS game, an MOBA game, and a multi-player gun battle type live game. The application 322 may be a stand-alone application, such as a stand-alone three-dimensional game program, or may be a network-connected application.
Fig. 4 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 400 includes: a first device 420, a server 440, and a second device 460.
The first device 420 is installed and operated with an application program supporting a virtual environment. The application program can be any one of a virtual reality application program, a three-dimensional map program, a TPS game, an FPS game, an MOBA game and a multi-player gunfight survival game. The first device 420 is a device used by a first user who uses the first device 420 to control a first virtual object located in a virtual environment for activities including, but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, throwing. Illustratively, the first virtual object is a first virtual character, such as a simulated persona or an animated persona.
The first device 420 is connected to the server 440 through a wireless network or a wired network.
The server 440 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 440 is used to provide background services for applications that support a three-dimensional virtual environment. Optionally, the server 440 undertakes the primary computing work while the first device 420 and the second device 460 undertake the secondary computing work; alternatively, the server 440 undertakes the secondary computing work while the first device 420 and the second device 460 undertake the primary computing work; alternatively, the server 440, the first device 420, and the second device 460 perform cooperative computing using a distributed computing architecture.
The second device 460 is installed and operated with an application program supporting a virtual environment. The application program may be any one of a virtual reality application program, a three-dimensional map program, an FPS game, an MOBA game, and a multi-player gunfight type live game. The second device 460 is a device used by a second user who uses the second device 460 to control a second virtual object located in the virtual environment to perform activities including, but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, throwing. Illustratively, the second virtual object is a second virtual character, such as a simulated character or an animated character.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. The first virtual character and the second virtual character may belong to the same team or the same organization, have a friend relationship, or have temporary communication rights; alternatively, they may belong to different teams, different organizations, or two mutually hostile groups.
Optionally, the applications installed on the first device 420 and the second device 460 are the same, or they are the same type of application for different control-system platforms. The first device 420 and the second device 460 each generally refer to one of a plurality of devices; this embodiment is illustrated with only the first device 420 and the second device 460. The device types of the first device 420 and the second device 460 are the same or different and include at least one of a game console, a desktop computer, a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer. In the following embodiments, the device is illustrated as a desktop computer.
Those skilled in the art will appreciate that the number of devices described above may be greater or fewer. For example, the number of the devices may be only one, or several tens or hundreds, or more. The number and the type of the devices are not limited in the embodiments of the present application.
It should be noted that the server 440 may be implemented as a physical server or as a cloud server. Cloud technology refers to a hosting technology that unifies hardware, software, network, and other resources in a wide-area or local-area network to realize the computation, storage, processing, and sharing of data. It is the general name for the network, information, integration, management-platform, and application technologies applied in the cloud-computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing is becoming an important support for the background services of technical network systems, such as video websites, picture websites, and other web portals, which require large amounts of computing and storage resources. As the internet industry develops, each article may carry its own identification mark that must be transmitted to a background system for logical processing; data of different levels are processed separately, and all kinds of industry data require strong system background support, which can only be realized through cloud computing.
In some embodiments, the method provided by the embodiment of the application can be applied to a cloud game scene, so that the data logic calculation in the game process is completed through the cloud server, and the terminal is responsible for displaying the game interface.
In some embodiments, the server 440 described above may also be implemented as a node in a blockchain system. The Blockchain (Blockchain) is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. The block chain, which is essentially a decentralized database, is a string of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, which is used for verifying the validity (anti-counterfeiting) of the information and generating a next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
Referring to FIG. 5, which shows a flowchart of an animation playing method provided by an exemplary embodiment of the present application, the method is described below as applied to a terminal. As shown in FIG. 5, the method includes:
step 501, receiving a control operation on a virtual object, wherein the control operation is used for controlling the virtual object to execute a target action.
The target action is used for controlling the virtual object to be adjusted from a first form holding the virtual prop to a second form. The first form is used for representing the form of the virtual object before the target action is executed, and the second form is used for representing the form of the virtual object after the target action is executed. The first form and the second form are mainly used for indicating the representation form of the virtual object, namely, the state of the virtual object.
In some embodiments, the target action comprises at least one of:
first, the mirror opening operation is an operation in which the virtual object holds a virtual gun and is adjusted from a waist shooting state to a state observed through the sighting telescope, and the mirror closing operation is an operation in which the virtual object holds a virtual gun and is adjusted from a state observed through the sighting telescope to a waist shooting state.
The first form corresponding to the open mirror action is the waist shooting state that the virtual object holds the virtual gun, and the second form corresponding to the open mirror action is the state that the virtual object holds the virtual gun and is observed through the sighting telescope.
Second, a throwing action, in which the virtual object throws a held throwing prop; because different throwing props have different weights, the motion amplitude during throwing also differs.
The first form corresponding to the throwing action is the state in which the hand of the virtual object draws back to store power, and the second form corresponding to the throwing action is the state in which the hand of the virtual object releases the throwing prop.
It should be noted that the implementation manner of the target action is only an illustrative example, and the present embodiment does not limit the action type of the target action.
In the embodiments of the present application, the target action implemented as the mirror-opening/mirror-closing action is taken as an example.
Illustratively, taking mirror opening as an example, a mirror-opening control operation is received, and the mirror-opening control operation is used for controlling the virtual object to execute a mirror-opening action on the virtual firearm. Before the mirror is opened, the virtual environment is observed from the first-person or third-person perspective of the virtual object; after the mirror is opened, the virtual environment is observed through a sighting telescope accessory assembled on the virtual firearm held by the virtual object.
Step 502, a first displacement vector corresponding to the first form and the second form is obtained based on the pose of the virtual object.
The pose of the virtual object is used to indicate where the body part of the virtual object is located, and in some embodiments the pose of the virtual object is used to indicate where the specified body part of the virtual object is located, such as: the pose of the virtual object is used to indicate the position where the hand of the virtual object is located; alternatively, the pose of the virtual object is used to indicate the positions at which at least two body parts of the virtual object are located, such as: the pose of the virtual object is used to indicate the position where the hands and head of the virtual object are located.
In this embodiment of the application, the posture of the virtual object is used to indicate the position point coordinates of the target body part of the virtual object. Taking the mirror-opening action as an example, the posture of the virtual object is used to indicate the position point coordinates of the hand of the virtual object in the virtual environment. Optionally, based on the posture of the virtual object, a first displacement vector of the target body part of the virtual object between the first form and the second form is obtained. In this case, when the first displacement vector is determined, the first displacement vector corresponding to the hand position of the virtual object in the first form and the second form is obtained.
The first form is the form of the virtual object before the target action is executed, that is, the first form is the current form. First position point coordinates of the target body part of the virtual object in the current first form are obtained; a target adjustment position of the virtual prop is determined; second position point coordinates of the target body part of the virtual object in the second form are determined based on the target adjustment position; and the difference between the second position point coordinates and the first position point coordinates is taken as the first displacement vector.
Taking the virtual prop as a virtual firearm and the target action as a mirror-opening action as an example, the target adjustment position is the position of the virtual firearm after the mirror is opened. When the target adjustment position is determined, the position at which the sighting telescope reticle center coincides with the center point of the observation range of the virtual object after the mirror is opened is determined first, and that coincidence position is taken as the target adjustment position of the virtual firearm.
That is, when the target action is implemented as a mirror-opening action, in the process of determining the first displacement vector, the first position point coordinates corresponding to the first form are determined directly from the current hand position of the virtual object; the second position point coordinates corresponding to the second form are determined from the position at which the sighting telescope reticle center coincides with the observation range center point after the mirror is opened, the hand position of the virtual object when the virtual firearm is at that coincidence position being obtained by back-calculation. The first position point coordinates and the second position point coordinates are determined with reference to a world coordinate system; alternatively, they are determined with reference to the coordinate system corresponding to the virtual object.
Schematically, according to the current position of the virtual object in the virtual environment, the first position point coordinates of the current hand position are determined with the world coordinate system as reference; the target adjustment position of the virtual firearm is determined with the coincidence of the sighting telescope reticle center and the observation range center as the target, and the second position point coordinates of the hand position are determined from the target adjustment position with the world coordinate system as reference. That is, the first position point coordinates of the hand of the virtual object before the current mirror opening are acquired, the second position point coordinates of the hand of the virtual object after the mirror opening are determined based on the target adjustment position, and the first position point coordinates are subtracted from the second position point coordinates to obtain the first displacement vector.
Schematically, referring to fig. 6, a virtual object 600 is included in the virtual environment. When the virtual object 600 shoots in the waist-shooting state, the virtual object 600 is in the first form, and the hand position coordinates of the virtual object 600 in that form are determined. When the virtual object 600 shoots in the mirror-opened state, the virtual object 600 is in the second form: the sighting telescope reticle center of the virtual firearm is made to coincide with the center of the virtual object's observation range of the virtual environment, the resulting position of the virtual firearm is obtained, the hand position coordinates of the virtual object 600 at that firearm position are back-calculated, and the difference between the two sets of hand position coordinates is determined as the first displacement vector.
The first displacement vector represents the hand movement amplitude that the virtual object needs to achieve to perform the mirror-opening action on the current virtual firearm.
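The computation of step 502 can be sketched in Python (a minimal illustration with hypothetical names; the embodiments do not prescribe an implementation): the first displacement vector is the per-axis difference between the hand coordinates in the second form and in the first form.

```python
# Hypothetical sketch of step 502: the first displacement vector is the
# per-axis difference between the hand position in the second form (aiming)
# and the hand position in the first form (waist shooting).
def first_displacement_vector(v1, v2):
    """Return V3 = V2 - V1 as a per-axis tuple."""
    return tuple(b - a for a, b in zip(v1, v2))

# Coordinates matching the example discussed with fig. 8:
v1 = (50.0, 50.0, 50.0)   # hand position, waist-shooting state
v2 = (25.0, 25.0, 75.0)   # hand position, aiming state
v3 = first_displacement_vector(v1, v2)   # (-25.0, -25.0, 25.0)
```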
Step 503, obtaining a second displacement vector corresponding to the first form and the second form in the reference animation resource.
The reference animation resource is an adaptive animation resource corresponding to the prop type and the target action of the virtual prop, and the reference animation resource also comprises intermediate animation data of the target action, wherein the intermediate animation data is used for indicating the action process of the target action.
The reference animation resources are set for the same type of virtual prop, or set for a group of virtual props in the same type of virtual props.
Illustratively, all virtual firearms share the same reference animation resource; or, where the virtual firearms include virtual rifles, virtual sniper rifles, and virtual pistols, the virtual rifles share one reference animation resource, the virtual sniper rifles share another, and the virtual pistols share another; or the virtual firearms are clustered into at least two firearm clusters according to firearm characteristics, with the virtual firearms in each cluster sharing the same reference animation resource, which is not limited in the embodiments of the present application.
The reference animation resource is a preset animation resource corresponding to the target action and includes coordinate data corresponding to the target body part of the virtual object. Optionally, taking the mirror-opening action as an example, the reference animation resource includes a sequence of animation frames of the mirror-opening process, each frame containing data (displacement, rotation, scaling, etc.) of each skeletal point of the virtual object; typically the game engine plays the animation frame by frame, thereby producing the action performance.
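The frame sequence described above can be modeled roughly as follows (a Python sketch; all type and field names are hypothetical, and the per-bone data is simplified to tuples):

```python
from dataclasses import dataclass, field

@dataclass
class BoneTransform:
    # Per-skeletal-point data carried by each animation frame, as described above.
    displacement: tuple   # (x, y, z)
    rotation: tuple       # e.g. a quaternion (x, y, z, w)
    scale: tuple          # (sx, sy, sz)

@dataclass
class AnimationFrame:
    bones: dict = field(default_factory=dict)   # bone name -> BoneTransform

@dataclass
class ReferenceAnimation:
    # Ordered frame sequence that the engine plays frame by frame.
    frames: list = field(default_factory=list)
```

An engine would then step through `frames` at the animation's frame rate and apply each `BoneTransform` to the corresponding skeletal point.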
That is, the hand skeletal point data corresponding to the first animation frame and the hand skeletal point data corresponding to the last animation frame in the reference animation resource are obtained, and the second displacement vector is determined from the difference between the two. The second displacement vector represents the hand movement amplitude of the mirror-opening action in the reference animation resource.
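As a sketch (hypothetical names; hand skeletal points reduced to position tuples), the second displacement vector is taken between the first and last frames of the reference animation resource:

```python
def second_displacement_vector(frames, bone="hand_r"):
    """Return A3 = A2 - A1, where A1/A2 are the hand positions in the first
    and last animation frames. `frames` is a list of dicts mapping bone
    names to (x, y, z) tuples."""
    a1 = frames[0][bone]
    a2 = frames[-1][bone]
    return tuple(e - s for s, e in zip(a1, a2))

frames = [{"hand_r": (40.0, 40.0, 40.0)},   # first frame
          {"hand_r": (35.0, 35.0, 42.0)},   # intermediate frame
          {"hand_r": (30.0, 30.0, 45.0)}]   # last frame
a3 = second_displacement_vector(frames)     # (-10.0, -10.0, 5.0)
```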
Step 504, the intermediate animation data is adjusted based on the scaling corresponding to the first displacement vector and the second displacement vector, and animation data of the virtual object executing the target action is obtained.
The first displacement vector represents the movement amplitude required for the virtual object to execute the target action, and the second displacement vector represents the movement amplitude configured in the reference animation resource. The scaling determined from the first displacement vector and the second displacement vector therefore indicates the coordinate adjustment ratio for the reference animation resource; after the animation data in the reference animation resource is coordinate-adjusted according to this ratio, animation data adapted to the current virtual object executing the target action on the virtual prop is obtained.
And 505, playing the animation of the virtual object execution target action based on the animation data.
In summary, in the animation playing method provided in the embodiments of the present application, the reference animation resource is an adaptive animation resource corresponding to the prop type of the virtual prop and the target action; that is, it is an animation resource corresponding to a group of virtual props. When the target action is executed, the virtual object holding any prop of the group can perform scaling adjustment using the reference animation resource, thereby implementing animation rendering and playing, improving the animation configuration efficiency corresponding to the target action, and reducing resource occupation and device computation.
In an alternative embodiment, the reference animation resources are determined according to the prop type of the virtual props. Fig. 7 is a flowchart of an animation playing method according to another exemplary embodiment of the present application, which is described by taking an example that the method is applied to a terminal, and as shown in fig. 7, the method includes:
step 701, receiving a control operation on a virtual object, where the control operation is used to control the virtual object to execute a target action.
The target action is used for controlling the virtual object to be adjusted from a first form holding the virtual prop to a second form.
Illustratively, taking open mirror as an example, an open mirror control operation is received, and the open mirror control operation is used for controlling the virtual object to execute an open mirror action on the virtual gun.
Step 702 acquires a first displacement vector corresponding to the first form and the second form based on the pose of the virtual object.
The first form is the form of the virtual object before the target action is executed, that is, the first form is the current form. First position point coordinates of the target body part of the virtual object in the current first form are obtained; a target adjustment position of the virtual prop is determined; second position point coordinates of the target body part of the virtual object in the second form are determined based on the target adjustment position; and the difference between the second position point coordinates and the first position point coordinates is taken as the first displacement vector.
The first displacement vector represents the hand movement amplitude that the virtual object needs to achieve to perform the mirror-opening action on the current virtual firearm.
Optionally, taking the mirror-opening action as an example, when the virtual environment is running, the hand position V1 of the currently equipped virtual firearm in the waist-shooting state and the hand position V2 when aiming are obtained, and V3 = V2 - V1 is calculated, where V3 is the first displacement vector.
Schematically, fig. 8 shows a schematic diagram of the mirror-opening process provided in an exemplary embodiment of the present application. A virtual firearm 810 is held by a virtual object 800; the hand position coordinates of the virtual object 800 in the waist-shooting state before the mirror is opened are V1(50, 50, 50), and the hand position coordinates after the mirror is opened are V2(25, 25, 75). The first displacement vector V3(-25, -25, 25) = V2 - V1 can then be calculated from the hand position coordinates in the waist-shooting state and the hand position coordinates after the mirror is opened.
Step 703, determining the prop type of the virtual prop.
In some embodiments, when the item type of the virtual item is determined, the functional classification type to which the virtual item belongs is determined; or determining the effect class subdivision type to which the virtual prop belongs. Wherein, the functional classification type is used for indicating the prop function realized by the virtual prop, such as: attack function, shield function, etc.; the effect class subdivision type is used for indicating the next level of classification of the virtual prop in the functional classification.
Illustratively, when the functional classification type to which the virtual prop belongs is determined, for example, the virtual prop is determined to be any one of a virtual firearm, a virtual throwing prop, and a virtual accessory; when the effect class subdivision type to which the virtual prop belongs is determined, taking a virtual firearm as an example, the virtual prop is determined to be any one of a virtual rifle, a virtual sniper rifle, and a virtual pistol.
Step 704, obtaining a reference animation resource corresponding to the prop type and the target action.
The reference animation resource is an animation resource preset corresponding to the prop type and the action type.
Optionally, the reference animation resource is an overlay-type animation resource: the skeletal data in the overlay-type animation resource is obtained by subtracting the first frame of the animation from each frame of the animation provided by the art team, and this data is generally called overlay data or delta data.
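The conversion to overlay (delta) data can be sketched as follows, subtracting the first frame from every frame per the description above (positions only; names hypothetical):

```python
def to_overlay_data(frames):
    """Convert absolute per-frame positions into additive ('delta') data by
    subtracting the first frame of the animation from every frame."""
    first = frames[0]
    return [tuple(f - f0 for f, f0 in zip(frame, first)) for frame in frames]

deltas = to_overlay_data([(40.0, 40.0, 40.0),
                          (35.0, 35.0, 42.0),
                          (30.0, 30.0, 45.0)])
# deltas[0] == (0.0, 0.0, 0.0); deltas[-1] == (-10.0, -10.0, 5.0)
```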
The reference animation resource is an adaptive animation resource corresponding to the prop type and the target action of the virtual prop, and the reference animation resource also comprises intermediate animation data of the target action, wherein the intermediate animation data is used for indicating the action process of the target action.
That is, the reference animation resource is applicable to the virtual props belonging to the prop type and the process of executing the target action; in other words, when the virtual object holds any virtual prop belonging to the prop type and executes the target action, animation rendering can be performed on the basis of the reference animation resource.
In some embodiments, if the reference animation resource is a resource corresponding to the functional classification type of the virtual item, determining the functional classification type of the virtual item; or, if the reference animation resource is a resource corresponding to the effect class subdivision type of the virtual prop, determining the effect class subdivision type of the virtual prop.
Optionally, taking the example that the virtual prop includes a virtual gun, acquiring a reference animation resource corresponding to the virtual gun and the target action; alternatively, a reference animation resource corresponding to the type of the virtual firearm and the target motion is acquired. In some embodiments, a reference animation resource is obtained that corresponds to a firearm type of the virtual firearm, a firearm accessory parameter of the virtual firearm, and the target action.
Step 705, obtaining the initial position point coordinates and the end position point coordinates of the target body position of the virtual object in the reference animation resources.
Illustratively, taking the mirror-opening action as an example, the position A1 of the hand in the first frame and the position A2 of the hand in the last frame of the reference mirror-opening animation resource are acquired when the virtual environment is running.
Wherein the start position point coordinates and the end position point coordinates of the target body position are relative to a world coordinate system in the virtual environment; alternatively, the start position point coordinates and the end position point coordinates of the target body position are relative to an object coordinate system corresponding to the virtual object.
Step 706, the difference between the ending position point coordinates and the starting position point coordinates is used as a second displacement vector.
Taking the above mirror-opening animation as an example, after the position A1 of the hand in the first frame and the position A2 of the hand in the last frame of the reference mirror-opening animation resource are acquired, A3 = A2 - A1 is calculated, where A3 represents the above second displacement vector.
Schematically, fig. 9 shows a schematic diagram of a reference mirror-opening animation resource provided in an exemplary embodiment of the present application. The hand position of the virtual object moves from a start position point 910 to an end position point 920 during the mirror-opening process, where the hand position coordinates of the start position point 910 are A1(40, 40, 40) and the hand position coordinates of the end position point 920 are A2(30, 30, 45). The second displacement vector A3(-10, -10, 5) = A2 - A1 can then be calculated from the hand position coordinates of the start position point 910 and the end position point 920.
Wherein the second displacement vector represents the hand movement amplitude of the mirror-opening action in the reference animation resource.
And 707, adjusting the intermediate animation data based on the scaling corresponding to the first displacement vector and the second displacement vector to obtain animation data of the virtual object executing the target motion, and playing the animation data.
The first displacement vector represents the movement amplitude required for the virtual object to execute the target action, and the second displacement vector represents the movement amplitude configured in the reference animation resource. The scaling determined from the first displacement vector and the second displacement vector therefore indicates the coordinate adjustment ratio for the reference animation resource; after the animation data in the reference animation resource is coordinate-adjusted according to this ratio, animation data adapted to the current virtual object executing the target action on the virtual prop is obtained.
In summary, in the animation playing method provided in the embodiments of the present application, the reference animation resource is an adaptive animation resource corresponding to the prop type of the virtual prop and the target action; that is, it is an animation resource corresponding to a group of virtual props. When the target action is executed, the virtual object holding any prop of the group can perform scaling adjustment using the reference animation resource, thereby implementing animation rendering and playing, improving the animation configuration efficiency corresponding to the target action, and reducing resource occupation and device computation.
According to the method provided by the embodiment, the reference animation resource is determined according to the item type of the virtual item, namely the reference animation resource can be applied to any virtual item conforming to the item type of the virtual item, so that the condition that the animation resource needs to be independently configured for each virtual item is avoided, the resource consumption in the animation resource configuration process is reduced, the animation resource configuration efficiency is improved, and the animation resource configuration accuracy is improved.
In an alternative embodiment, scaling is used to multiply the intermediate animation data. Fig. 10 is a flowchart of an animation playing method according to another exemplary embodiment of the present application, which is described by taking an example that the method is applied to a terminal, and as shown in fig. 10, the method includes:
step 1001, receiving a control operation on a virtual object, where the control operation is used to control the virtual object to execute a target action.
The target action is used for controlling the virtual object to be adjusted from a first form holding the virtual prop to a second form.
Illustratively, taking open mirror as an example, an open mirror control operation is received, and the open mirror control operation is used for controlling the virtual object to execute an open mirror action on the virtual gun.
Step 1002, a first displacement vector corresponding to the first form and the second form is obtained based on the pose of the virtual object.
The first form is the form of the virtual object before the target action is executed, that is, the first form is the current form. First position point coordinates of the target body part of the virtual object in the current first form are obtained; a target adjustment position of the virtual prop is determined; second position point coordinates of the target body part of the virtual object in the second form are determined based on the target adjustment position; and the difference between the second position point coordinates and the first position point coordinates is taken as the first displacement vector.
The first displacement vector represents the hand movement amplitude that the virtual object needs to achieve to perform the mirror-opening action on the current virtual firearm.
Step 1003, obtaining a second displacement vector corresponding to the first form and the second form in the reference animation resource.
The reference animation resource is an adaptive animation resource corresponding to the prop type and the target action of the virtual prop, and the reference animation resource also comprises intermediate animation data of the target action, wherein the intermediate animation data is used for indicating the action process of the target action.
The reference animation resources are set for the same type of virtual prop, or set for a group of virtual props in the same type of virtual props.
Taking the mirror-opening action as an example, the hand skeletal point data corresponding to the first animation frame and the hand skeletal point data corresponding to the last animation frame in the reference animation resource are obtained, and the second displacement vector is determined from the difference between the two. The second displacement vector represents the hand movement amplitude of the mirror-opening action in the reference animation resource.
Step 1004 determines a scaling based on a ratio between the first displacement vector and the second displacement vector.
Illustratively, taking the first displacement vector V3(-25, -25, 25) and the second displacement vector A3(-10, -10, 5) as an example, the per-axis ratio between the first displacement vector and the second displacement vector is determined as the scaling, that is, the scaling S3(2.5, 2.5, 5) = V3 / A3.
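This per-axis division can be sketched as follows (illustrative; it assumes no zero component in the second displacement vector):

```python
def scaling_ratio(v3, a3):
    """Return S3 = V3 / A3 computed per axis."""
    return tuple(v / a for v, a in zip(v3, a3))

s3 = scaling_ratio((-25.0, -25.0, 25.0), (-10.0, -10.0, 5.0))
# s3 == (2.5, 2.5, 5.0), matching the scaling in the example above
```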
Step 1005, using the product of the scaling and the intermediate animation data as the animation data of the virtual object executing the target action.
In some embodiments, the target action comprises an open mirror action, the intermediate animation data comprises position data of a hand of the virtual object during the open mirror, and the product of the scaling and the position data of the hand during the open mirror is taken as the animation data of the hand position when the virtual object performs the open mirror action.
Illustratively, when the mirror-opening animation is played, the displacement data V of each frame of skeletal output in the reference animation resource is scaled using S3, that is, the final result R = S3 × V.
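Applying the scaling to each frame is then a per-axis multiplication (a sketch with hypothetical names):

```python
def scale_frame_displacement(v, s3):
    """Scale one frame's skeletal displacement data V by the per-axis
    scaling S3, giving the adapted output R = S3 * V."""
    return tuple(s * c for s, c in zip(s3, v))

scale_frame_displacement((-4.0, -4.0, 2.0), (2.5, 2.5, 5.0))
# (-10.0, -10.0, 10.0)
```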
In some embodiments, during the mirror-opening process the hand of the virtual object starts at the first position point coordinates of the virtual object in the first form and ends at the second position point coordinates of the virtual object in the second form.
In some embodiments, the arm bones are adjusted synchronously by a bone binding system during the adjustment of the hand position. The bone binding system is a system in the game animation system for correcting the arm performance; through preset constraint calculation, it adjusts the animation effect of the whole arm according to the positions of the hand skeletal points.
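The description does not specify the constraint calculation. One common form of such arm correction is two-bone inverse kinematics, where the elbow bend angle is solved from the hand target position with the law of cosines; the following is a generic sketch of that step, not the specific system of this application:

```python
import math

def elbow_angle(shoulder, hand, upper_len, fore_len):
    """Solve the interior elbow angle (180 degrees = straight arm) for a
    two-bone arm chain whose hand must reach `hand`, using the law of
    cosines. Generic two-bone IK step; names and units are illustrative."""
    d = math.dist(shoulder, hand)
    d = min(d, upper_len + fore_len)   # clamp: unreachable target -> straight arm
    cos_e = (upper_len ** 2 + fore_len ** 2 - d ** 2) / (2 * upper_len * fore_len)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_e))))
```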
Step 1006, playing the animation of the virtual object execution target action based on the animation data.
The first displacement vector represents the movement amplitude required for the virtual object to execute the target action, and the second displacement vector represents the movement amplitude configured in the reference animation resource. The scaling determined from the first displacement vector and the second displacement vector therefore indicates the coordinate adjustment ratio for the reference animation resource; after the animation data in the reference animation resource is coordinate-adjusted according to this ratio, animation data adapted to the current virtual object executing the target action on the virtual prop is obtained.
In summary, in the animation playing method provided in the embodiments of the present application, the reference animation resource is an adaptive animation resource corresponding to the prop type of the virtual prop and the target action; that is, it is an animation resource corresponding to a group of virtual props. When the target action is executed, the virtual object holding any prop of the group can perform scaling adjustment using the reference animation resource, thereby implementing animation rendering and playing, improving the animation configuration efficiency corresponding to the target action, and reducing resource occupation and device computation.
According to the method provided by the embodiment, after the scaling is obtained through calculation, the scaling adjustment is performed on the reference animation resource through the scaling, so that the action animation corresponding to the current virtual prop and the target action can be obtained, and the adaptability of animation rendering is improved.
Fig. 11 is a schematic diagram of an overall process of open mirror animation provided in an exemplary embodiment of the present application, as shown in fig. 11, the process includes:
at step 1101, the player clicks on the open mirror control.
That is, the player triggers the mirror-opening control to input a mirror-opening instruction; after the mirror-opening instruction is received, the virtual character is instructed to play the mirror-opening animation based on the virtual firearm.
Step 1102, animation updating.
That is, when the animation logic of the player character is updated, the hand position point V1 before the mirror is opened and the hand position point V2 when aiming are acquired, and the vector V3 = V2 - V1 is calculated.
And 1103, acquiring animation resource data.
A start point A1 and an end point A2 of the hand displacement are acquired from the animation resource, and the vector A3 = A2 - A1 is calculated.
In step 1104, a scaling is calculated based on the actual displacement.
V3 is divided by A3 per axis to obtain the ratio data S3.
And 1105, acquiring the open-mirror animation resources according to the scaling.
Each frame of animation data during the mirror opening is scaled in real time according to the ratio data S3, so that the animation is adapted to the actual hand motion of the current virtual firearm from the waist-shooting state to aiming.
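Steps 1101 to 1105 can be combined into one end-to-end sketch (hypothetical names; positions only):

```python
def adapted_open_mirror_frames(v1, v2, ref_frames):
    """Adapt a reference mirror-opening animation to the current firearm:
    V3 = V2 - V1 (step 1102), A3 = A2 - A1 (step 1103), S3 = V3 / A3 per
    axis (step 1104), then scale each frame's offset from the first frame
    by S3 (step 1105)."""
    v3 = tuple(b - a for a, b in zip(v1, v2))
    a1, a2 = ref_frames[0], ref_frames[-1]
    a3 = tuple(b - a for a, b in zip(a1, a2))
    s3 = tuple(v / a for v, a in zip(v3, a3))
    return [tuple(f0 + s * (f - f0) for f0, s, f in zip(a1, s3, frame))
            for frame in ref_frames]

frames = adapted_open_mirror_frames(
    (50.0, 50.0, 50.0), (25.0, 25.0, 75.0),
    [(40.0, 40.0, 40.0), (35.0, 35.0, 42.0), (30.0, 30.0, 45.0)])
# The adapted end-to-end hand displacement now equals V3 = (-25.0, -25.0, 25.0).
```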
It should be noted that the above example is described taking the mirror-opening animation as an example; the animation playing method may also be applied to mirror-closing animation or prop throwing, which is not limited in the embodiments of the present application.
In summary, in the animation playing method provided in the embodiments of the present application, the reference animation resource is an adaptive animation resource corresponding to the prop type of the virtual prop and the target action; that is, it is an animation resource corresponding to a group of virtual props. When the target action is executed, the virtual object holding any prop of the group can perform scaling adjustment using the reference animation resource, thereby implementing animation rendering and playing, improving the animation configuration efficiency corresponding to the target action, and reducing resource occupation and device computation.
Fig. 12 is a flowchart of an animation playing method according to another exemplary embodiment of the present application, which is described by taking an example that the method is applied to a terminal, and as shown in fig. 12, the method includes:
Step 1201, receiving a control operation on the virtual object, wherein the control operation is used for controlling the virtual object to execute a target action. The target action is used for controlling the virtual object to adjust from a first form, in which the virtual prop is held, to a second form. The first form represents the form of the virtual object before the target action is executed, and the second form represents the form of the virtual object after the target action is executed. The first form and the second form mainly indicate the representation form, that is, the state, of the virtual object.
In some embodiments, the target action includes a mirror-on/mirror-off action, a throwing action, and the like.
In the embodiments of the present application, the target action implemented as the mirror-opening/mirror-closing action is taken as an example.
Illustratively, taking open mirror as an example, an open mirror control operation is received, and the open mirror control operation is used for controlling the virtual object to execute an open mirror action on the virtual gun. Before opening the mirror, the virtual environment is observed at the first or third person-named visual angle of the virtual object, and after opening the mirror, the virtual environment is observed through a sighting telescope matching piece assembled on a virtual gun held by the virtual object.
Step 1202: displaying the animation of the virtual object executing the target action based on the control operation.
In the execution process of the target action, the target body part of the virtual object changes along a target change path, the target change path is obtained by scaling adjustment on the basis of the reference change path, and the scaling corresponds to the virtual prop held by the virtual object.
In some embodiments, the target change path corresponds to a first displacement vector, the reference change path corresponds to a second displacement vector, and the target change path is a path obtained by adjusting the reference change path based on the scaling corresponding to the first displacement vector and the second displacement vector.
Optionally, the first displacement vector is a displacement vector corresponding to the first form and the second form acquired based on the pose of the virtual object;
the second displacement vector is a displacement vector corresponding to the first form and the second form, which is obtained based on a reference animation resource, and the reference animation resource is an adaptive animation resource corresponding to the prop type and the target action of the virtual prop.
Optionally, based on the posture of the virtual object, a first displacement vector corresponding to the target body position of the virtual object in the first form and the second form is obtained. That is, taking the hand as the target body position, when the first displacement vector is determined, the first displacement vector corresponding to the hand position of the virtual object in the first form and the second form is obtained.
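As a minimal sketch (not the patented implementation, and with illustrative names and coordinates), the first displacement vector above can be computed as the component-wise difference between the hand position in the second form and in the first form:

```python
# Hypothetical sketch: the first displacement vector is the difference
# between the hand position in the second form (derived from the target
# adjustment position) and the hand position in the first form.
def first_displacement_vector(hand_first_form, hand_second_form):
    """Component-wise difference of two (x, y, z) hand positions."""
    return tuple(b - a for a, b in zip(hand_first_form, hand_second_form))

# Example: the hand rises and moves toward the camera during scope opening.
v1 = first_displacement_vector((0.0, 0.0, 0.0), (0.1, 0.3, -0.2))
print(v1)  # (0.1, 0.3, -0.2)
```

The coordinates here are placeholders; in an engine they would come from the pose of the skeletal model before the action and from the computed target adjustment position.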
The reference animation resource is a preset animation resource corresponding to the target action and includes coordinate data corresponding to the target body position of the virtual object. Optionally, taking the open-mirror action as an example, the reference animation resource includes a sequence of animation frames of the scope-opening process, where each frame includes data (displacement, rotation, scaling, and the like) of each skeletal point of the virtual object; typically, the game engine plays these animation frames one by one, thereby presenting the motion.
That is, the hand skeleton point data corresponding to the first animation frame and the hand skeleton point data corresponding to the last animation frame in the reference animation resource are obtained, and the second displacement vector is determined from the difference between the two. The second displacement vector represents the amplitude of the hand movement of the open-mirror action in the reference animation resource.
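The second displacement vector described above can be sketched as follows; the frame and bone layout is an assumption made for illustration, not the actual resource format:

```python
# Hypothetical sketch: read the hand skeleton point from the first and last
# frames of the reference animation resource and take their difference as
# the second displacement vector.
def second_displacement_vector(frames, bone="hand"):
    """frames: list of dicts mapping bone name -> (x, y, z) position."""
    first = frames[0][bone]
    last = frames[-1][bone]
    return tuple(l - f for f, l in zip(first, last))

reference_frames = [
    {"hand": (0.0, 0.0, 0.0)},     # first frame: first form
    {"hand": (0.05, 0.12, -0.1)},  # intermediate frame
    {"hand": (0.08, 0.25, -0.15)}, # last frame: second form
]
v2 = second_displacement_vector(reference_frames)
print(v2)  # (0.08, 0.25, -0.15)
```

Only the first and last frames contribute to the vector; the intermediate frames are the "intermediate animation data" that is later scaled.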
Fig. 13 is a block diagram of a structure of an animation playback device according to an exemplary embodiment of the present application, where as shown in fig. 13, the device includes:
a receiving module 1310, configured to receive a control operation on a virtual object, where the control operation is used to control the virtual object to perform a target action, and the target action is used to control the virtual object to adjust from a first form holding a virtual prop to a second form;
an obtaining module 1320, configured to obtain a first displacement vector corresponding to the first form and the second form based on the posture of the virtual object;
the obtaining module 1320 is further configured to obtain a second displacement vector corresponding to the first form and the second form in a reference animation resource, where the reference animation resource is an adaptive animation resource corresponding to the item type of the virtual item and the target action, and the reference animation resource further includes intermediate animation data of the target action, and the intermediate animation data is used to indicate an action process of the target action;
a playing module 1330, configured to adjust the intermediate animation data based on the scaling corresponding to the first displacement vector and the second displacement vector, so as to obtain animation data of the virtual object executing the target action for playing.
In an optional embodiment, the obtaining module 1320 is further configured to obtain a first location point coordinate of a target body location of the virtual object in the current first form;
as shown in fig. 14, the apparatus further includes:
determining module 1340, configured to determine a target adjustment position of the virtual item; determining second location point coordinates of the target body position of the virtual object in the second state based on the target adjusted position; and taking the difference between the second position point coordinate and the first position point coordinate as the first displacement vector.
In an optional embodiment, the virtual prop is a virtual firearm, and the target action is a scope opening action;
the determining module 1340 is further configured to determine a coincidence position of a sighting telescope collimation center and a central point of an observation range of the virtual object after the scope of the virtual firearm is opened; and to take the coincidence position of the sighting telescope collimation center and the observation range central point as the target adjustment position of the virtual firearm.
In an optional embodiment, the obtaining module 1320 is further configured to obtain coordinates of a first position point of a hand of the virtual object before the current mirror opening;
the determining module 1340 is further configured to determine, based on the target adjustment position, a second position point coordinate of the hand of the virtual object after the mirror is opened.
In an optional embodiment, the apparatus further comprises:
determining module 1340, configured to determine a prop type of the virtual prop;
the obtaining module 1320 is further configured to obtain the reference animation resource corresponding to the prop type and the target action; acquiring an initial position point coordinate and an end position point coordinate of the target body position of the virtual object in the reference animation resource, wherein the initial position point coordinate corresponds to the first form, and the end position point coordinate corresponds to the second form;
the determining module 1340 is further configured to use a difference between the coordinates of the ending position point and the coordinates of the starting position point as the second displacement vector.
In an alternative embodiment, the virtual prop comprises a virtual firearm;
the obtaining module 1320 is further configured to obtain the reference animation resource corresponding to the virtual firearm and the target motion;
or,
the obtaining module 1320 is further configured to obtain the reference animation resource corresponding to the gun type of the virtual gun and the target action.
In an optional embodiment, the apparatus further comprises:
a determining module 1340 for determining the scaling based on a ratio between the first displacement vector and the second displacement vector; and taking the product of the scaling and the intermediate animation data as the animation data of the virtual object for executing the target action.
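The scaling and the product with the intermediate animation data can be sketched as follows (the data layout is an assumption; an engine would apply this per skeletal point per frame):

```python
import math

# Hypothetical sketch: the scaling is the ratio of the magnitudes of the
# first and second displacement vectors, and each piece of intermediate
# animation data is multiplied by that scaling.
def scaling_ratio(v1, v2):
    def magnitude(v):
        return math.sqrt(sum(c * c for c in v))
    return magnitude(v1) / magnitude(v2)

def scale_intermediate_frames(frames, ratio, bone="hand"):
    """Multiply each intermediate hand offset by the scaling ratio."""
    return [{bone: tuple(c * ratio for c in f[bone])} for f in frames]

# The pose-derived displacement is twice the reference one, so ratio = 2.0.
ratio = scaling_ratio((0.0, 2.0, 0.0), (0.0, 1.0, 0.0))
scaled = scale_intermediate_frames([{"hand": (0.1, 0.2, 0.3)}], ratio)
print(ratio, scaled)  # 2.0 [{'hand': (0.2, 0.4, 0.6)}]
```

This reflects the text's "product of the scaling and the intermediate animation data": the same reference clip stretches or shrinks to match the hand travel required by the currently held prop.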
In an optional embodiment, the target action comprises an open mirror action, and the intermediate animation data comprises position data of the hand of the virtual object in the open mirror process;
the determining module 1340 is further configured to take a product of the scaling and the position data of the hand during the mirror-opening process as animation data of the hand position when the virtual object performs the mirror-opening action.
In an alternative embodiment, the receiving module 1310 is further configured to receive a mirror opening control operation, where the mirror opening control operation is used to control the virtual object to perform the mirror opening action on the virtual firearm.
In an optional embodiment, the playing module 1330 is further configured to perform a synchronous adjustment on the arm skeleton based on a skeleton binding system during the adjustment of the hand position.
The present application provides another animation playback apparatus, including:
the receiving module is used for receiving a control operation on a virtual object, wherein the control operation is used for controlling the virtual object to execute a target action, and the target action is used for controlling the virtual object to be adjusted from a first form holding a virtual prop to a second form;
the display module is used for displaying the animation of the virtual object executing the target action based on the control operation;
in the execution process of the target action, the target body part of the virtual object changes along a target change path, the target change path is obtained by scaling adjustment on the basis of a reference change path, and the scaling corresponds to the virtual prop held by the virtual object.
In an alternative embodiment, the target varying path corresponds to a first displacement vector, and the reference varying path corresponds to a second displacement vector;
the target change path is a path obtained by adjusting the reference change path based on the scaling ratios corresponding to the first displacement vector and the second displacement vector.
In an optional embodiment, the first displacement vector is a displacement vector corresponding to the first form and the second form acquired based on a pose of the virtual object;
the second displacement vector is a displacement vector corresponding to the first form and the second form, which is obtained based on a reference animation resource, and the reference animation resource is an adaptive animation resource corresponding to the prop type of the virtual prop and the target action.
To sum up, in the animation playback apparatus provided in this application, the reference animation resource is an adaptive animation resource corresponding to the prop type of the virtual prop and the target action; that is, it is a single animation resource shared by a group of virtual props. When the target action is executed while the virtual object holds any prop in the group, the reference animation resource can be scaled and adjusted to render and play the animation. This improves the efficiency of configuring animations for the target action and reduces resource occupation and device computation.
It should be noted that: the animation playback device provided in the foregoing embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the animation playing device and the animation playing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 15 shows a block diagram of a terminal 1500 according to an exemplary embodiment of the present application. The terminal 1500 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1500 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
In general, terminal 1500 includes: a processor 1501 and memory 1502.
Processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1501 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 1501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is used to store at least one instruction for execution by processor 1501 to implement the animation playback method provided by method embodiments herein.
In some embodiments, the terminal 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1504, a display 1505, a camera 1506, an audio circuit 1507, and a power supply 1508.
The peripheral interface 1503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 1504 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1504 can communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to capture touch signals on or over the surface of the display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1505 may be one, providing the front panel of terminal 1500; in other embodiments, display 1505 may be at least two, each disposed on a different surface of terminal 1500 or in a folded design; in still other embodiments, display 1505 may be a flexible display disposed on a curved surface or a folded surface of terminal 1500. Even further, the display 1505 may be configured in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1505 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1501 for processing or inputting the electric signals to the radio frequency circuit 1504 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1507 may also include a headphone jack.
A power supply 1508 is used to power the various components in terminal 1500. The power source 1508 may be ac, dc, disposable, or rechargeable. When the power source 1508 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyro sensor 1512, pressure sensor 1513, optical sensor 1514, and proximity sensor 1515.
The acceleration sensor 1511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1501 may control the touch screen display 1505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 1512 can detect the body direction and the rotation angle of the terminal 1500, and the gyroscope sensor 1512 and the acceleration sensor 1511 cooperate to collect the 3D motion of the user on the terminal 1500. The processor 1501 may implement the following functions according to the data collected by the gyro sensor 1512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1513 may be disposed on a side bezel of terminal 1500 and/or underneath touch display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, the holding signal of the user to the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at a lower layer of the touch display 1505, the processor 1501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 1505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
An optical sensor 1514 is used to collect the ambient light intensity. In one embodiment, processor 1501 may control the display brightness of touch display 1505 based on the ambient light intensity collected by optical sensor 1514. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1505 is turned down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1514.
A proximity sensor 1515, also known as a distance sensor, is typically provided on the front panel of the terminal 1500. The proximity sensor 1515 is used to collect the distance between the user and the front surface of the terminal 1500. In one embodiment, when the proximity sensor 1515 detects that the distance between the user and the front surface of the terminal 1500 gradually decreases, the processor 1501 controls the touch display 1505 to switch from the bright screen state to the dark screen state; when the proximity sensor 1515 detects that the distance between the user and the front surface of the terminal 1500 gradually increases, the processor 1501 controls the touch display 1505 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 15 does not constitute a limitation of terminal 1500, and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be employed.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (17)

1. An animation playing method, characterized in that the method comprises:
receiving a control operation on a virtual object, wherein the control operation is used for controlling the virtual object to execute a target action, and the target action is used for controlling the virtual object to be adjusted from a first form holding a virtual prop to a second form;
acquiring a first displacement vector corresponding to the first form and the second form based on the pose of the virtual object;
acquiring a second displacement vector corresponding to the first form and the second form in a reference animation resource, wherein the reference animation resource is an adaptive animation resource corresponding to the prop type of the virtual prop and the target action, the reference animation resource further comprises intermediate animation data of the target action, and the intermediate animation data is used for indicating the action process of the target action;
adjusting the intermediate animation data based on the scaling corresponding to the first displacement vector and the second displacement vector to obtain animation data of the virtual object executing the target action;
playing the animation of the virtual object executing the target action based on the animation data.
2. The method of claim 1, wherein the obtaining a first displacement vector corresponding to the first and second modalities based on the pose of the virtual object comprises:
acquiring a first position point coordinate of the target body position of the virtual object in a current first form;
determining a target adjustment position of the virtual prop;
determining second location point coordinates of the target body position of the virtual object in the second state based on the target adjusted position;
and taking the difference between the second position point coordinate and the first position point coordinate as the first displacement vector.
3. The method of claim 2, wherein the virtual prop is a virtual firearm, and the target action is a scope opening action;
the determining a target adjustment position of the virtual prop includes:
determining the coincidence position of the sighting telescope collimation center and the central point of the observation range of the virtual object after the scope of the virtual gun is opened;
and taking the coincident position of the sighting telescope collimation center and the observation range central point as the target adjustment position of the virtual gun.
4. The method of claim 3, wherein the obtaining first location point coordinates of the target body location of the virtual object in the current first modality comprises:
acquiring a first position point coordinate of a hand of the virtual object before the current mirror opening;
the determining second location point coordinates of the target body position of the virtual object in the second state based on the target adjusted position comprises:
and determining second position point coordinates of the hand of the virtual object after the mirror is opened based on the target adjusting position.
5. The method according to any one of claims 1 to 4, wherein the obtaining a second displacement vector corresponding to the first modality and the second modality in the reference animation resource comprises:
determining a prop type of the virtual prop;
acquiring the reference animation resource corresponding to the prop type and the target action;
acquiring an initial position point coordinate and an end position point coordinate of the target body position of the virtual object in the reference animation resource, wherein the initial position point coordinate corresponds to the first form, and the end position point coordinate corresponds to the second form;
and taking the difference between the coordinates of the ending position point and the coordinates of the starting position point as the second displacement vector.
6. The method of claim 5, wherein the virtual prop comprises a virtual firearm;
the obtaining the reference animation resource corresponding to the prop type and the target action comprises:
acquiring the reference animation resource corresponding to the virtual gun and the target action;
or,
and acquiring the reference animation resource corresponding to the gun type of the virtual gun and the target action.
7. The method according to any one of claims 1 to 4, wherein the adjusting the intermediate animation data based on the scaling corresponding to the first displacement vector and the second displacement vector to obtain the animation data of the virtual object performing the target action comprises:
determining the scaling based on a ratio between the first displacement vector and the second displacement vector;
and taking the product of the scaling and the intermediate animation data as the animation data of the virtual object for executing the target action.
8. The method of claim 7, wherein the target action comprises an open mirror action, and the intermediate animation data comprises position data of a hand of the virtual object during the open mirror action;
the taking of the product of the scaling and the intermediate animation data as the animation data of the virtual object to perform the target action includes:
and taking the product of the scaling and the position data of the hand in the process of opening the mirror as animation data of the hand position when the virtual object executes the mirror opening action.
9. The method of claim 8, wherein the receiving of a control operation on a virtual object comprises:
receiving a scope-opening control operation, wherein the scope-opening control operation is used for controlling the virtual object to perform the scope-opening action with the virtual firearm.
10. The method of claim 8, further comprising:
during the adjustment of the hand position, synchronously adjusting the arm bones based on a skeletal binding (rigging) system.
11. An animation playing method, characterized in that the method comprises:
receiving a control operation on a virtual object, wherein the control operation is used for controlling the virtual object to execute a target action, and the target action is used for controlling the virtual object to be adjusted from a first form holding a virtual prop to a second form;
displaying an animation of the virtual object executing the target action based on the control operation;
during execution of the target action, the target body part of the virtual object changes along a target change path, the target change path being obtained by scaling adjustment of a reference change path, and the scaling corresponding to the virtual prop held by the virtual object.
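The path scaling in claim 11 can be sketched as below, assuming the reference change path is a list of 3-D points and the scaling reduces to a single per-prop factor; both assumptions are illustrative simplifications.

```python
def target_change_path(reference_path, prop_scale):
    """Scale each point of the reference change path to obtain the
    target change path for the currently held virtual prop."""
    return [tuple(prop_scale * c for c in point) for point in reference_path]
```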
12. The method of claim 11, wherein the target change path corresponds to a first displacement vector and the reference change path corresponds to a second displacement vector;
the target change path is a path obtained by adjusting the reference change path based on the scaling ratios corresponding to the first displacement vector and the second displacement vector.
13. The method of claim 12,
the first displacement vector is a displacement vector corresponding to the first form and the second form acquired based on the posture of the virtual object;
the second displacement vector is a displacement vector corresponding to the first form and the second form, obtained based on a reference animation resource, and the reference animation resource is an adaptive animation resource corresponding to the prop type of the virtual prop and the target action.
14. An animation playing apparatus, comprising:
the receiving module is used for receiving a control operation on a virtual object, wherein the control operation is used for controlling the virtual object to execute a target action, and the target action is used for controlling the virtual object to be adjusted from a first form holding a virtual prop to a second form;
an obtaining module configured to obtain a first displacement vector corresponding to the first form and the second form based on a pose of the virtual object;
the obtaining module is further configured to obtain a second displacement vector corresponding to the first form and the second form in a reference animation resource, where the reference animation resource is an adaptive animation resource corresponding to the prop type of the virtual prop and the target action, the reference animation resource further includes intermediate animation data of the target action, and the intermediate animation data is used to indicate an action process of the target action;
the playing module is used for adjusting the intermediate animation data based on the scaling corresponding to the first displacement vector and the second displacement vector to obtain animation data of the virtual object executing the target action; playing the animation of the virtual object executing the target action based on the animation data.
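The three modules of claim 14 can be sketched as one class; vectors are reduced to scalars purely to keep the flow readable, and every name here is an assumption rather than part of the patent.

```python
class AnimationPlayingApparatus:
    """Illustrative sketch of the receiving, obtaining, and playing modules."""

    def receive(self, control_operation):
        """Receiving module: stores the operation that triggers the target action."""
        self.control_operation = control_operation

    def obtain(self, first_displacement, second_displacement):
        """Obtaining module: first vector derived from the object's posture,
        second vector derived from the reference animation resource."""
        self.first = first_displacement
        self.second = second_displacement

    def play(self, intermediate_data):
        """Playing module: adjusts the intermediate data by the scaling
        and returns the animation data to be played."""
        scale = self.first / self.second
        return [scale * sample for sample in intermediate_data]
```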
15. A computer device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the animation playing method according to any one of claims 1 to 13.
16. A computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the animation playing method according to any one of claims 1 to 13.
17. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the animation playing method according to any one of claims 1 to 13.
CN202111628772.6A 2021-10-15 2021-12-28 Animation playing method, device, equipment, readable storage medium and program product Pending CN114037783A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111204137.5A CN113920227A (en) 2021-10-15 2021-10-15 Animation playing method, device, equipment, readable storage medium and program product
CN2021112041375 2021-10-15

Publications (1)

Publication Number Publication Date
CN114037783A true CN114037783A (en) 2022-02-11

Family

ID=79240893

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111204137.5A Withdrawn CN113920227A (en) 2021-10-15 2021-10-15 Animation playing method, device, equipment, readable storage medium and program product
CN202111628772.6A Pending CN114037783A (en) 2021-10-15 2021-12-28 Animation playing method, device, equipment, readable storage medium and program product

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202111204137.5A Withdrawn CN113920227A (en) 2021-10-15 2021-10-15 Animation playing method, device, equipment, readable storage medium and program product

Country Status (1)

Country Link
CN (2) CN113920227A (en)

Also Published As

Publication number Publication date
CN113920227A (en) 2022-01-11

Similar Documents

Publication Publication Date Title
CN109126129B (en) Method, device and terminal for picking up virtual article in virtual environment
CN111589128B (en) Operation control display method and device based on virtual scene
CN109529319B (en) Display method and device of interface control and storage medium
CN111013142B (en) Interactive effect display method and device, computer equipment and storage medium
CN111035918B (en) Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN110045827B (en) Method and device for observing virtual article in virtual environment and readable storage medium
CN111589142A (en) Virtual object control method, device, equipment and medium
CN111603771B (en) Animation generation method, device, equipment and medium
CN108786110B (en) Method, device and storage medium for displaying sighting telescope in virtual environment
CN110448908B (en) Method, device and equipment for applying sighting telescope in virtual environment and storage medium
CN111026318B (en) Animation playing method, device and equipment based on virtual environment and storage medium
CN111273780B (en) Animation playing method, device and equipment based on virtual environment and storage medium
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN111325822B (en) Method, device and equipment for displaying hot spot diagram and readable storage medium
CN111603770A (en) Virtual environment picture display method, device, equipment and medium
CN112843679A (en) Skill release method, device, equipment and medium for virtual object
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN110833695B (en) Service processing method, device, equipment and storage medium based on virtual scene
CN112330823A (en) Virtual item display method, device, equipment and readable storage medium
CN111589116A (en) Method, device, terminal and storage medium for displaying function options
CN113134232B (en) Virtual object control method, device, equipment and computer readable storage medium
CN112755517B (en) Virtual object control method, device, terminal and storage medium
CN114130023A (en) Virtual object switching method, device, equipment, medium and program product
CN111672115A (en) Virtual object control method and device, computer equipment and storage medium
CN113633970B (en) Method, device, equipment and medium for displaying action effect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination