CN114677462A - Animation creating method and device

Info

Publication number: CN114677462A
Application number: CN202110353933.9A
Authority: CN (China)
Legal status: Pending
Prior art keywords: frame, image, animation, target, image frame
Other languages: Chinese (zh)
Inventor: 苟亚明
Original and current assignee: Tencent Cloud Computing Beijing Co Ltd

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces

Abstract

The application discloses an animation creating method and device. The method includes: when target animation data is loaded from a preset animation library, extracting animation parameters of at least one first element in the target animation data, the animation parameters including an image frame parameter and a position parameter; generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element; and creating an animation composed of the multi-frame image, the animation including at least one second element in one-to-one correspondence with the at least one first element. The method helps reduce the load of the server and thereby improve the utilization of the server's system resources.

Description

Animation creating method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for creating an animation.
Background
With the development of software, more and more applications can play particle animations. Taking chat software as an example, after the user inputs the text "happy birthday" in a conversation window, the terminal device may play an animation in which a number of cakes fall.
Particle animations are usually drawn with a user interface view (UIView), but drawing a particle animation with UIView occupies a large amount of central processing unit (CPU) time, resulting in heavy consumption of system resources.
Disclosure of Invention
The embodiments of the invention provide an animation creating method and device, which help reduce the load of a server and thereby improve the utilization of the server's system resources.
In one aspect, an embodiment of the present invention provides an animation creating method, including:
when target animation data are loaded from a preset animation library, extracting animation parameters of at least one first element in the target animation data, wherein the animation parameters comprise image frame parameters and position parameters;
generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element;
and creating an animation consisting of the plurality of frames of images, wherein the animation comprises at least one second element, and the at least one second element is in one-to-one correspondence with the at least one first element.
In one embodiment, generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element includes: determining, according to the image frame parameter of each first element in the at least one first element, a target image frame in the multi-frame image in which the second element corresponding to each first element exists, wherein the target image frame includes at least two image frames; determining the position of each second element in each target image frame according to the position parameter of each first element; and generating the multi-frame image according to the position of each second element in each target image frame.
In one embodiment, the image frame parameters include start key frame information, end key frame information, and a frame rate, and the target image frame includes a start image frame, an end image frame, and an intermediate image frame;
determining, according to the image frame parameter of each first element in the at least one first element, the target image frame in the multi-frame image in which the second element corresponding to each first element exists includes: determining, according to the start key frame information of each first element in the at least one first element, a start image frame in the multi-frame image in which each second element exists; determining, according to the end key frame information of each first element, an end image frame in the multi-frame image in which each second element exists; and determining, according to the frame rate of each first element, an intermediate image frame in the multi-frame image in which each second element exists, wherein the intermediate image frame is located between the start image frame and the end image frame.
In one embodiment, the position parameters include coordinate information;
determining the position of each second element in each target image frame according to the position parameter of each first element includes: determining the position of each second element in each target image frame according to the coordinate information of each first element and the number of target image frames.
In one embodiment, the position parameters further include pose information;
determining the position of each second element in each target image frame according to the coordinate information of each first element and the number of target image frames includes: determining the pose information of each second element in each target image frame according to the pose information of each first element and the number of target image frames; and generating the multi-frame image according to the position and pose information of each second element in each target image frame.
In one embodiment, the multi-frame image includes the respective second elements and the interaction objects corresponding to the respective second elements, and the positions of the respective second elements in the multi-frame image are the same as the positions of the interaction objects corresponding to the respective second elements in the multi-frame image;
after creating the animation composed of the plurality of frames of images, the method further includes: when a trigger event sent by a client is received, searching an animation matched with the trigger event, and sending the animation to the client, wherein the trigger event is generated when the client detects the operation of a user interface; receiving an operation event sent by the client, wherein the operation event is generated when the client monitors the operation of the second element contained in any image frame in the animation through an interactive object corresponding to any second element in the process of playing the animation; generating at least one frame of image corresponding to the operation event; and sending the at least one frame of image to the client so that the client displays the at least one frame of image.
In one embodiment, before generating the multi-frame image based on the image frame parameter and the position parameter of the at least one first element, the method further comprises: generating an interactive object corresponding to each second element in the at least one second element; and generating a multi-frame image comprising the second elements and the interactive objects corresponding to the second elements according to the image frame parameters and the position parameters of the at least one first element.
In one embodiment, the playing interface for playing the animation and the user interface are displayed in different layers of the client, and the display interface for displaying the at least one frame of image and the playing interface are displayed in different layers of the client.
In one embodiment, before generating at least one frame of image corresponding to the operation event, the method further includes: searching a memory for a target interactive object, wherein the second element corresponding to the target interactive object matches any third element contained in the at least one frame of image; determining the interactive object corresponding to the third element as the target interactive object; and generating, according to the operation event, at least one frame of image including the third element and the target interactive object, wherein the position of the third element in the at least one frame of image is the same as the position of the target interactive object in the at least one frame of image.
In one embodiment, before searching the target interactive object from the memory, the method further comprises: when element indication information sent by a client is received, storing an interactive object corresponding to a target second element indicated by the element indication information into a memory, wherein the element indication information is generated when the client monitors the operation of the target second element contained in any image frame in the animation in the process of playing the animation through the interactive object corresponding to the target second element.
In one aspect, the present invention provides an apparatus for animation creation, including an obtaining unit and a processing unit:
the acquiring unit is used for extracting animation parameters of at least one first element in target animation data when the target animation data are loaded from a preset animation library, wherein the animation parameters comprise image frame parameters and position parameters;
the processing unit is used for generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element;
the processing unit is further used for creating an animation composed of the plurality of frames of images, and the animation comprises at least one second element which is in one-to-one correspondence with the at least one first element.
In one embodiment, the processing unit, when generating the multi-frame image based on the image frame parameter and the position parameter of the at least one first element, is configured to: determining a target image frame of which a second element corresponding to each first element exists in the multi-frame image according to the image frame parameters of each first element in the at least one first element, wherein the target image frame comprises at least two frame images; determining the position of each second element in each target image frame according to the position parameter of each first element; and generating the multi-frame image according to the position of each second element in each target image frame.
In one embodiment, the image frame parameters include start key frame information, end key frame information, and a frame rate, and the target image frame includes a start image frame, an end image frame, and an intermediate image frame;
the processing unit, when determining that the second element corresponding to each first element exists in the target image frame in the multi-frame image according to the image frame parameter of each first element in the at least one first element, is configured to: determining a starting image frame of each second element in the multi-frame image according to the starting key frame information of each first element in the at least one first element; determining an ending image frame of each second element existing in the multi-frame image according to the ending key frame information of each first element; and determining that the second elements exist in an intermediate image frame in the multi-frame image according to the frame rate of the first elements, wherein the intermediate image frame is positioned between the starting image frame and the ending image frame.
In one embodiment, the location parameters include coordinate information;
the processing unit, when determining the position of the respective second element in the respective target image frame according to the position parameter of the respective first element, is configured to: and determining the position of each second element in each target image frame according to the coordinate information of each first element and the number of the target image frames.
In one embodiment, the location parameters further include pose information;
the processing unit, when determining the position of each second element in each target image frame according to the coordinate information of each first element and the number of target image frames, is configured to: determine the pose information of each second element in each target image frame according to the pose information of each first element and the number of target image frames; and generate the multi-frame image according to the position and pose information of each second element in each target image frame.
In one embodiment, the multi-frame image includes the respective second elements and the interaction objects corresponding to the respective second elements, and the positions of the respective second elements in the multi-frame image are the same as the positions of the interaction objects corresponding to the respective second elements in the multi-frame image;
The apparatus further comprises a communication unit: the communication unit is used for searching the animation matched with the trigger event when the trigger event sent by the client is received after the processing unit is used for creating the animation composed of the multi-frame images, and sending the animation to the client, wherein the trigger event is generated when the client detects the operation on the user interface; the communication unit is further configured to receive an operation event sent by the client, where the operation event is generated when the client monitors an operation on the second element included in any image frame in the animation through an interactive object corresponding to any second element in the process of playing the animation; the processing unit is also used for generating at least one frame of image corresponding to the operation event; the processing unit is further configured to send the at least one frame of image to the client, so that the client displays the at least one frame of image.
In one embodiment, the processing unit, before the processing unit is configured to generate the multi-frame image based on the image frame parameter and the position parameter of the at least one first element, is further configured to: generating an interactive object corresponding to each second element in the at least one second element; the generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element comprises: and generating a multi-frame image comprising the second elements and the interactive objects corresponding to the second elements according to the image frame parameters and the position parameters of the at least one first element.
In one embodiment, the playing interface for playing the animation and the user interface are displayed in different layers of the client, and the display interface for displaying the at least one frame of image and the playing interface are displayed in different layers of the client.
In one embodiment, before the processing unit is configured to generate the at least one frame of image corresponding to the operation event, the processing unit is further configured to: searching a target interactive object from a memory, wherein a second element corresponding to the target interactive object is matched with any third element contained in the at least one frame of image; determining the interactive object corresponding to the third element as the target interactive object; and generating at least one frame of image comprising the third element and the target interactive object according to the operation event, wherein the position of the third element in the at least one frame of image is the same as the position of the target interactive object in the at least one frame of image.
In one embodiment, the processing unit, before the processing unit is configured to retrieve the target interaction object from the memory, is further configured to: when element indication information sent by a client is received, storing an interactive object corresponding to a target second element indicated by the element indication information into a memory, wherein the element indication information is generated when the client monitors operation on the target second element contained in any image frame in the animation in the process of playing the animation through the interactive object corresponding to the target second element.
In one aspect, an embodiment of the present invention provides a server, including:
a processor adapted to implement one or more instructions; and
a computer storage medium storing one or more instructions adapted to be loaded and executed by the processor to:
when target animation data are loaded from a preset animation library, extracting animation parameters of at least one first element in the target animation data, wherein the animation parameters comprise image frame parameters and position parameters; generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element; and creating an animation consisting of the plurality of frames of images, wherein the animation comprises at least one second element, and the at least one second element is in one-to-one correspondence with the at least one first element.
In one aspect, an embodiment of the present invention provides a computer storage medium, where computer program instructions are stored in the computer storage medium, and when executed by a processor, the computer program instructions are configured to perform:
when target animation data are loaded from a preset animation library, extracting animation parameters of at least one first element in the target animation data, wherein the animation parameters comprise image frame parameters and position parameters; generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element; and creating an animation consisting of the plurality of frames of images, wherein the animation comprises at least one second element, and the at least one second element is in one-to-one correspondence with the at least one first element.
In one aspect, an embodiment of the present invention provides a computer program product or a computer program, where the computer program product includes a computer program, and the computer program is stored in a computer storage medium; a processor of the server reads the computer instructions from the computer storage medium, the processor performing:
when target animation data are loaded from a preset animation library, extracting animation parameters of at least one first element in the target animation data, wherein the animation parameters comprise image frame parameters and position parameters; generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element; and creating an animation consisting of the plurality of frames of images, wherein the animation comprises at least one second element, and the at least one second element is in one-to-one correspondence with the at least one first element.
Based on the animation creating method above, when target animation data is loaded from the preset animation library, the server extracts the animation parameters contained in the target animation data, generates a multi-frame image according to the animation parameters, and creates an animation composed of the multi-frame image. The server therefore does not need to call a large number of drawing functions to draw the image frames; it only needs to load the target animation data and can create the animation based on its animation parameters. This helps reduce the load of the server and thereby improve the utilization of the server's system resources.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a communication system provided in an embodiment of the present invention;
FIG. 2 is a flow chart of a method for creating an animation according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a multi-frame image according to an embodiment of the present invention;
FIG. 4 is a flowchart of another animation creation method provided by an embodiment of the invention;
FIG. 5 is a schematic diagram of a human-machine interaction provided by an embodiment of the invention;
FIG. 6 is a schematic diagram of yet another human-machine interaction provided by embodiments of the invention;
FIG. 7 is a flowchart of another animation creation method provided by an embodiment of the invention;
FIG. 8 is a schematic structural diagram of an animation creation apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
To make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the given embodiments without creative effort shall fall within the protection scope of the present invention.
Cloud computing refers to a delivery and usage mode of IT infrastructure: obtaining the required resources over a network in an on-demand, easily scalable manner. In the broad sense, cloud computing refers to a delivery and usage mode of services: obtaining the required services over a network in an on-demand, easily scalable manner. Such services may be IT and software services, internet-related services, or other services. Cloud computing is a product of the development and fusion of traditional computing and network technologies such as grid computing, distributed computing, parallel computing, utility computing, network storage, virtualization, and load balancing.
With the diversification of the internet, real-time data streams, and connected devices, and driven by demands such as search services, social networking, mobile commerce, and open collaboration, cloud computing has developed rapidly. Unlike earlier parallel and distributed computing, the emergence of cloud computing is, in concept, driving revolutionary change in the entire internet model and enterprise management model.
The invention provides an animation creating method that loads target animation data from a preset animation library and creates an animation based on the animation parameters contained in the target animation data. The method helps reduce the load of the server and thereby improve the utilization of the server's system resources.
To implement the method proposed by the present invention, an embodiment of the present invention provides a communication system; please refer to fig. 1. The communication system includes, but is not limited to, one or more clients and one or more servers. Fig. 1 takes one client 101 and one server 102 as an example, where communication can be established between the client 101 and the server 102. The number and configuration of the devices shown in fig. 1 are only an example and do not limit the embodiments of the present invention.
The client 101 may be a terminal device (user equipment, UE), such as a mobile phone, a computer with wireless transceiving capability, a virtual reality (VR) terminal, an augmented reality (AR) terminal, a wireless terminal in self-driving, a wireless terminal in remote medicine, a wireless terminal in a smart grid, or a wireless terminal in transportation safety, or a wearable device such as a smart watch or a smart band.
The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), and big data and artificial intelligence platforms.
In a specific implementation, when the target animation data is loaded from the preset animation library, the server 102 may extract animation parameters of at least one first element in the target animation data, where the animation parameters include image frame parameters and position parameters, and then the server 102 may generate a multi-frame image according to the image frame parameters and the position parameters of the at least one first element, and create an animation composed of the multi-frame image, where the animation includes at least one second element, where the at least one second element corresponds to the at least one first element one to one.
In one embodiment, the client 101 generates a trigger event upon detecting a user-triggered event and sends the trigger event to the server 102. After receiving the trigger event, the server 102 searches for an animation matching the trigger event and sends the animation to the client 101. After receiving the animation, the client 101 plays it. During playback, the client detects the user's operation on the animation, generates an operation event, and sends it to the server 102. The server 102 receives the operation event, generates at least one frame of image corresponding to the operation event, and sends it to the client 101, which displays the at least one frame of image after receiving it.
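To make this round trip concrete, the following is a minimal sketch of the client-server messages; the types and fields are hypothetical, since the application does not define a message format.

```swift
import Foundation

// Messages the client 101 sends to the server 102 (names are illustrative assumptions).
enum ClientMessage {
    case triggerEvent(text: String)                  // e.g. "happy birthday" typed in a chat window
    case operationEvent(elementID: Int, frame: Int)  // the user operated a second element
}

// Messages the server 102 sends back.
enum ServerMessage {
    case animation(frames: [Data])       // the matched animation, as encoded image frames
    case responseFrames(frames: [Data])  // at least one frame generated for an operation event
}
```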
Based on the communication system, the embodiment of the invention provides a method for creating animation. Fig. 2 is a schematic flow chart illustrating a method for creating an animation according to an embodiment of the present invention. The animation creation method can be applied to the communication system shown in fig. 1, and is executed by a server, and particularly can be executed by a processor of the server. The animation creating method comprises the following steps:
S201, when target animation data are loaded from a preset animation library, a server extracts animation parameters of at least one first element in the target animation data, wherein the animation parameters comprise image frame parameters and position parameters.
In one embodiment, the preset animation library includes a plurality of animation data, the animation data includes at least one first element, each element has corresponding animation parameters, and different elements have different corresponding animation parameters.
In one embodiment, the image frame parameters include start key frame information, end key frame information, and a frame rate. The start key frame information of a first element includes information about the start image frame, in the animation to be created, of the second element corresponding to that first element; the end key frame information of the first element includes information about the end image frame of the corresponding second element in the animation to be created; and the frame rate of the first element refers to the number of frames per second at which the corresponding second element is displayed when the animation to be created is played.
In one embodiment, the position parameters include coordinate information. The coordinate information of a first element includes the position information of the corresponding second element in each image frame of the animation to be created.
In one embodiment, the position parameters also include pose information, which may contain rotation parameters and scaling parameters. The pose information of a first element includes the pose of the corresponding second element in each image frame of the animation to be created.
Optionally, the animation library may be a Lottie animation library, an animation library stored locally on the server in advance, or an animation library stored on another server; the embodiment of the present invention does not limit the type of the animation library.
Optionally, when the target animation data is loaded from the preset animation library, the server may extract the animation parameters of the at least one first element by acquiring them in a hook manner.
Optionally, the target animation data may be a JSON file, and the JSON file may include a frame rate, start key frame information, end key frame information, view width information, view height information, image name information, resource set information, layer set information, mask set information, and the like.
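As a hedged illustration, target animation data of this kind could be modeled and loaded as follows. The short field names ("fr", "ip", "op", "w", "h", "assets", "layers", "p") follow common Lottie JSON conventions and are assumptions here, not definitions from this application.

```swift
import Foundation

// Sketch of the target animation data described above (Lottie-style JSON).
struct AnimationData: Codable {
    let fr: Double              // frame rate
    let ip: Double              // start key frame ("in point")
    let op: Double              // end key frame ("out point")
    let w: Int                  // view width information
    let h: Int                  // view height information
    let assets: [Asset]         // resource set information
    let layers: [Layer]         // layer set information (one layer per element)

    struct Asset: Codable {
        let id: String
        let p: String           // image name information
    }
    struct Layer: Codable {
        let ind: Int            // layer index
        let ip: Double          // the element's start key frame
        let op: Double          // the element's end key frame
    }
}

// Loading target animation data from a locally stored preset animation library:
func loadTargetAnimation(named name: String) throws -> AnimationData {
    guard let url = Bundle.main.url(forResource: name, withExtension: "json") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try JSONDecoder().decode(AnimationData.self, from: Data(contentsOf: url))
}
```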
Optionally, the target animation may be a particle animation, and the elements contained in the target animation data may be particle elements of the particle animation. For example, a user sends "happy birthday" in a session window to trigger playing of the target animation; the target animation then shows a number of birthday cakes falling on the user interface of the client, and the birthday cakes are the particle elements. Besides particle animations, the target animation may also be another type of animation; the embodiment of the present invention does not limit the type of the target animation.
S202, the server generates a multi-frame image according to the image frame parameter and the position parameter of the at least one first element.
In one embodiment, the server may determine, according to the image frame parameter of each first element in the at least one first element, the target image frame in the multi-frame image in which the second element corresponding to each first element exists, where the target image frame includes at least two image frames.
The image frame parameters include start key frame information, end key frame information, and a frame rate. The server may determine, according to the start key frame information of each first element in the at least one first element, the start image frame in which each second element exists in the multi-frame image, where the start image frame is the first image frame in which the second element appears; the server may determine, according to the end key frame information of each first element, the end image frame in which each second element exists, that is, the last image frame in which the second element appears; and the server may determine, according to the frame rate of each first element, the intermediate image frames in which each second element exists, where an intermediate image frame lies between the start image frame and the end image frame. Specifically, the frame rate of a first element gives the number of frames per second at which the corresponding second element is displayed when the created animation is played; from the previously determined start image frame and end image frame, the total number of frames between them, and the frame rate of the animation to be created, the intermediate image frames of the second element in the multi-frame image can be determined.
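A minimal sketch of this determination, assuming the key frame numbers are expressed at the element's own frame rate and are mapped onto frame indices of the animation to be created; the names and the mapping are illustrative assumptions, not definitions from this application.

```swift
// Image frame parameters of one first element (illustrative names).
struct ImageFrameParams {
    let startKeyFrame: Double   // start key frame information
    let endKeyFrame: Double     // end key frame information
    let frameRate: Double       // frames per second for this element
}

/// Returns the indices of the target image frames (start, intermediate, end)
/// in which the corresponding second element exists, given the frame rate of
/// the animation to be created.
func targetImageFrames(for params: ImageFrameParams,
                       animationFrameRate: Double) -> ClosedRange<Int> {
    // Convert key frames to time, then to frame indices of the animation.
    let startIndex = Int((params.startKeyFrame / params.frameRate) * animationFrameRate)
    let endIndex = Int((params.endKeyFrame / params.frameRate) * animationFrameRate)
    return startIndex...endIndex  // first index: start image frame; last: end image frame;
                                  // everything in between: intermediate image frames
}
```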
In one embodiment, the server may determine the position of the respective second elements in the respective target image frames according to the position parameters of the respective first elements; and generating the multi-frame image according to the position of each second element in each target image frame.
The position parameters include coordinate information and pose information, and the pose information includes a scaling parameter and a rotation parameter. The server may determine the position of each second element in each target image frame according to the coordinate information of each first element and the number of target image frames, and may determine the pose of each second element in each target image frame according to the pose information of each first element and the number of target image frames.
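As an illustration of this step, the following sketch linearly interpolates one second element's position and pose across its target image frames. The linear interpolation and all names are assumptions; the application states only that positions and poses are derived from the coordinate information, the pose information, and the number of target image frames.

```swift
import CoreGraphics

struct Pose {
    var rotation: CGFloat   // rotation parameter
    var scale: CGFloat      // scaling parameter
}

/// Position and pose of a second element in each of its target image frames,
/// linearly interpolated between the first element's start and end values.
func placements(startPosition: CGPoint, endPosition: CGPoint,
                startPose: Pose, endPose: Pose,
                targetFrameCount: Int) -> [(position: CGPoint, pose: Pose)] {
    guard targetFrameCount > 1 else { return [(startPosition, startPose)] }
    return (0..<targetFrameCount).map { i in
        let t = CGFloat(i) / CGFloat(targetFrameCount - 1)  // 0 at start frame, 1 at end frame
        let position = CGPoint(x: startPosition.x + (endPosition.x - startPosition.x) * t,
                               y: startPosition.y + (endPosition.y - startPosition.y) * t)
        let pose = Pose(rotation: startPose.rotation + (endPose.rotation - startPose.rotation) * t,
                        scale: startPose.scale + (endPose.scale - startPose.scale) * t)
        return (position, pose)
    }
}
```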
A specific example will be described below. Suppose the server needs to generate an animation of a falling cake. Target animation data is loaded from the preset animation library, and the server extracts the animation parameters of a first element contained in the target animation data; the first element corresponds to a second element in the multi-frame image, and the second element is the cake element in the animation. Suppose it is determined, from the start key frame information, end key frame information, and frame rate contained in the image frame parameters of the first element, that the falling-cake animation is composed of three image frames, where the first image frame is the start image frame of the cake element and the third image frame is its end image frame. Since an intermediate image frame lies between the start image frame and the end image frame, the second image frame can be determined, from the frame rate, to be the intermediate image frame of the cake element. From the position parameter of the first element, the position of the cake element in the three image frames can be determined; because the animation shows the effect of a falling cake, the position of the cake element moves gradually downward from the first frame to the third frame. By the method described above, the server can generate an animation including the cake element 301 as shown in fig. 3.
S203, the server creates an animation composed of the multi-frame image, wherein the animation comprises at least one second element, and the at least one second element is in one-to-one correspondence with the at least one first element.
In one embodiment, the animation created by the server includes at least one second element, and each second element corresponds one-to-one to a first element contained in the target animation data loaded from the preset animation library, so that the image frame information, position information, and pose information of the second element in the animation can be determined according to the image frame parameter and position parameter of the first element.
Fig. 4 is a flowchart illustrating a method for creating an animation according to an embodiment of the present invention, where the method for creating an animation is applicable to the communication system shown in fig. 1.
S401, when target animation data are loaded from a preset animation library, the server extracts animation parameters of at least one first element in the target animation data, wherein the animation parameters comprise image frame parameters and position parameters.
The specific implementation of step S401 is the same as that of step S201 and is not described here again.
S402, the server generates an interactive object corresponding to each second element in the at least one second element.
In one embodiment, each second element has a one-to-one corresponding interactive object, and the interactive object corresponding to a second element is used to monitor that second element. Since the image frame, position, and pose of a second element in the animation to be created are determined from the image frame parameter and position parameter of the corresponding first element, the interactive object can be placed at the same position as its second element in the animation, so that it can capture operation events on the second element and respond to the corresponding target event in time. For example, a user triggers, through the client, the playing of an animation in which a number of cakes fall on the client interface; a second element is one of the cakes, and if the user clicks that cake, the interactive object corresponding to the second element will capture the operation event information.
Optionally, the interactive object includes an interaction parameter, and the client may determine, through this parameter, whether the second element corresponding to the interactive object supports interaction.
Optionally, the interactive object further includes different gesture parameters. Through the interactive object, the client can monitor and distinguish different operation gestures performed by the user on the second element, and can respond with different events accordingly. For example, when the client plays an animation of falling birthday cakes, it can distinguish, through the interactive object, two different operation events: the user single-clicking a cake and the user double-clicking a cake, and it can respond by playing two different animations. If the user single-clicks the cake, the client plays an animation of the cake flying out; if the user double-clicks the cake, the client plays an animation in which the cake splits into two cakes. In this way, the user experience can be improved.
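On an iOS client, an interactive object of this kind could be sketched as a transparent view with gesture recognizers. The class name and callbacks are assumptions; the single-tap and double-tap responses stand in for the two cake animations described above.

```swift
import UIKit

// Sketch of an interactive object that tracks one second element and
// distinguishes the user's single and double taps on it.
final class InteractiveObject: UIView {
    var onSingleTap: (() -> Void)?   // e.g. request the "cake flies out" animation
    var onDoubleTap: (() -> Void)?   // e.g. request the "cake splits in two" animation

    override init(frame: CGRect) {
        super.init(frame: frame)
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleSingleTap))
        let doubleTap = UITapGestureRecognizer(target: self, action: #selector(handleDoubleTap))
        doubleTap.numberOfTapsRequired = 2
        singleTap.require(toFail: doubleTap)  // don't treat a double tap as a single tap
        addGestureRecognizer(singleTap)
        addGestureRecognizer(doubleTap)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func handleSingleTap() { onSingleTap?() }
    @objc private func handleDoubleTap() { onDoubleTap?() }
}
```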
And S403, generating a multi-frame image by the server according to the image frame parameter and the position parameter of the at least one first element.
In one embodiment, the server may generate a multi-frame image including the respective second elements and the interactive objects corresponding to the respective second elements according to the image frame parameter and the position parameter of the at least one first element. The multi-frame image comprises the second elements and the interactive objects corresponding to the second elements, and the positions of the second elements in the multi-frame image are the same as the positions of the interactive objects corresponding to the second elements in the multi-frame image.
S404, the server creates an animation formed by the multi-frame image, wherein the animation comprises at least one second element, and the at least one second element is in one-to-one correspondence with the at least one first element.
The specific implementation of step S404 is the same as that of step S203 and is not described here again.
S405, the client generates a trigger event and sends the trigger event to the server.
In one embodiment, the client detects an event in which the user triggers the playing of an animation on the user interface; the client then generates a trigger event and sends it to the server. For example, as shown in fig. 5, the user Xiao Ming sends "happy birthday" to the peer user Xiao Hong in a session window of the client, and the user interface of the client plays an animation of many falling cakes; Xiao Ming's sending of "happy birthday" is the event by which the user triggers the animation playing. The user may also trigger the animation playing event in other ways, for example by pressing a target button of the client, which is not limited in the embodiment of the present invention.
S406, the server searches the animation matched with the trigger event and sends the animation to the client.
Optionally, the animation may be an animation created by the server in advance, or an animation created by the server based on the trigger event, which is not limited in the embodiment of the present invention.
S407, the client plays the animation.
In one embodiment, the playing interface for playing the animation and the user interface are displayed in different layers of the client. Because the two are in different layers, and the interactive objects are on the playing interface, the interaction range between the user and the animation is related only to the size of the animation's playing interface and not to the size of the user interface. For example, as shown in fig. 5, the user interface 502 is on the layer where Xiao Ming and Xiao Hong's conversation window is located, while the animation of the falling cakes being played is on the layer where the playing interface 501 is located; the extent of the user interface 502 is smaller than the animation's playing interface 501. Since the interaction range is related only to the size of the playing interface, the user can click an animation element outside the user interface 502, for example the cake 5011 that lies beyond the user interface 502 but within the playing interface 501. This helps expand the interaction range between the user and the animation and improves the user experience.
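A minimal sketch of this layering on an iOS client follows: the playing interface is a full-screen transparent overlay above the layer holding the conversation window, so interactive objects placed on it remain tappable even outside the user interface's bounds. The touch passthrough is an illustrative design choice, not something the application specifies.

```swift
import UIKit

// Playing interface: a transparent overlay whose untouched areas pass touches
// through to the user interface below, while its subviews (the interactive
// objects) still receive taps anywhere on screen.
final class PlayingInterfaceView: UIView {
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        let hit = super.hitTest(point, with: event)
        return hit === self ? nil : hit  // only interactive objects catch touches
    }
}

// Attach the playing interface above the layer holding the conversation window.
func addPlayingInterface(to window: UIWindow) -> PlayingInterfaceView {
    let playingInterface = PlayingInterfaceView(frame: window.bounds)
    playingInterface.backgroundColor = .clear
    window.addSubview(playingInterface)  // a different layer from the user interface
    return playingInterface
}
```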
Steps S405 to S407 will be described using a specific example. As shown in fig. 5, when the user Xiao Ming sends "happy birthday" to the peer user Xiao Hong in the session window of the client, an animation playing event is triggered and the client generates a trigger event. The trigger event also carries the text content "happy birthday"; the server extracts the keyword "happy birthday" from the text content, finds the animation in which birthday cakes fall, and sends the animation to the client, which then plays the falling-cake animation.
S408, the client generates an operation event and sends the operation event to the server.
S409, the server generates at least one frame of image corresponding to the operation event and sends the at least one frame of image to the client.
Optionally, after receiving the operation event, the server may generate response events other than a frame of image, for example acquiring the user's information, recording the operation event corresponding to the user, or acquiring a corresponding web page based on the operation event and sending it to the client so that the client jumps to the web page interface.
Optionally, the at least one frame of image generated by the server includes at least one third element and the interactive objects corresponding to the at least one third element, and the third element can interact with the user. In this way, the user's interactive experience can be improved.
S410, the client displays the at least one frame of image.
In one embodiment, the client displays the at least one frame of image after receiving it. For example, as shown in fig. 6, when the user clicks a cake on the client interface, the client displays an animation of the cake flying out. This flying-out animation is the at least one frame of image sent by the server: based on the user's cake-click event, the server generates multiple frames of images in which the cake flies out of the user interface and sends them to the client, which then plays these frames. The display interface used for displaying the at least one frame of image and the playing interface are displayed in different layers of the client.
Fig. 7 is a flowchart illustrating a method for creating an animation according to an embodiment of the present invention, where the method for creating an animation is applicable to the communication system shown in fig. 1.
S701, when target animation data are loaded from a preset animation library, a server extracts animation parameters of at least one first element in the target animation data, wherein the animation parameters comprise image frame parameters and position parameters.
S702, the server generates an interactive object corresponding to each second element in the at least one second element.
And S703, the server generates a multi-frame image according to the image frame parameter and the position parameter of the at least one first element.
S704, the server creates an animation formed by the multi-frame image, wherein the animation comprises at least one second element, and the at least one second element is in one-to-one correspondence with the at least one first element.
S705, the client generates a trigger event and sends the trigger event to the server.
S706, after receiving the trigger event sent by the client, the server searches for the animation matched with the trigger event and sends the animation to the client.
And S707, the client plays the animation.
S708, the client generates an operation event and sends the operation event to the server.
The specific implementations of steps S701 to S708 are the same as those of steps S401 to S408 and are not described here again.
S709, the server stores the interactive object corresponding to the target second element indicated by the element indication information into a memory.
In one embodiment, the client sends element indication information to the server together with the operation event. The element indication information is generated when the client, through the interactive object corresponding to a target second element, monitors an operation on that target second element contained in any image frame of the animation during playback. Based on the image frame corresponding to the triggering operation event and the corresponding second element, the server generates an animation of multiple image frames that no longer contain the second element and sends it to the client, so that the client can subsequently play the multi-frame animation without the second element.
Optionally, the memory may be a buffer of the client, and only temporarily stores the interactive object, and the embodiment of the present invention does not limit the type of the memory.
Optionally, the element indication information may be generated after the client finishes playing the end image frame in which the target second element exists in the multi-frame image, and the embodiment of the present invention does not limit the generation time of the element indication information.
S710, the server searches a target interactive object from the memory, and a second element corresponding to the target interactive object is matched with any third element contained in the at least one frame of image.
In one embodiment, the third element and the second element have the same frame rate. Because the user performed the operation event on the second element in the animation, any third element contained in the generated at least one frame of image also corresponds to that second element, in which case the frame rates of the third element and the second element are kept consistent. Since elements with different frame rates cannot use the same interactive object, and the interactive object corresponding to the second element was generated based on the second element's frame rate, the third element, having the same frame rate, can reuse the second element's interactive object.
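A sketch of this reuse path is shown below; it keys the cache by frame rate, in line with the constraint that elements with different frame rates cannot use the same interactive object. The InteractiveObject type is the one sketched earlier, and the keying scheme is an assumption rather than something the application specifies.

```swift
import UIKit  // for InteractiveObject (the UIView subclass sketched earlier)

// Sketch of the memory (cache) of interactive objects used in S709 to S711.
final class InteractiveObjectCache {
    private var objectsByFrameRate: [Double: InteractiveObject] = [:]

    // S709: when element indication information arrives, store the interactive
    // object corresponding to the indicated target second element.
    func store(_ object: InteractiveObject, frameRate: Double) {
        objectsByFrameRate[frameRate] = object
    }

    // S710/S711: look up a target interactive object whose second element
    // matches a third element, approximated here by an equal frame rate.
    func targetObject(matchingFrameRate frameRate: Double) -> InteractiveObject? {
        objectsByFrameRate[frameRate]
    }
}
```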
Optionally, the interactive object corresponding to the third element may also be directly generated by the server, and is not required to be obtained from the cache region.
S711, the server determines that the interactive object corresponding to the third element is the target interactive object.
In one embodiment, at least one frame of image including the third element and the target interactive object is generated according to the operation event, and the position of the third element in the at least one frame of image is the same as the position of the target interactive object in the at least one frame of image. In this way, the load of the server is advantageously reduced.
S712, the server generates at least one frame of image corresponding to the operation event, and sends the at least one frame of image to the client.
S713, the client displays the at least one frame of image.
The specific implementations of steps S712 and S713 are the same as those of steps S409 and S410 and are not described here again.
Based on the animation creating method described above, an embodiment of the present invention provides an animation creating apparatus. Referring to fig. 8, fig. 8 is a schematic structural diagram of an animation creating apparatus according to an embodiment of the present invention; the apparatus 80 includes an obtaining unit 801 and a processing unit 802.
The obtaining unit 801 is configured to, when target animation data is loaded from a preset animation library, extract animation parameters of at least one first element in the target animation data, where the animation parameters include image frame parameters and position parameters;
the processing unit 802 is configured to generate a multi-frame image according to the image frame parameter and the position parameter of the at least one first element;
the processing unit 802 is further configured to create an animation composed of the plurality of frames of images, where the animation includes at least one second element, and the at least one second element is in one-to-one correspondence with the at least one first element.
In one embodiment, the processing unit 802, when generating a multi-frame image based on the image frame parameter and the position parameter of the at least one first element, is configured to: determining a target image frame of which a second element corresponding to each first element exists in the multi-frame image according to the image frame parameters of each first element in the at least one first element, wherein the target image frame comprises at least two frame images; determining the position of each second element in each target image frame according to the position parameter of each first element; and generating the multi-frame image according to the position of each second element in each target image frame.
In one embodiment, the image frame parameters include start key frame information, end key frame information, and a frame rate, and the target image frame includes a start image frame, an end image frame, and an intermediate image frame;
the processing unit 802, when determining that the second element corresponding to each first element exists in the target image frame in the multi-frame image according to the image frame parameter of each first element in the at least one first element, is configured to: determining a starting image frame of each second element in the multi-frame image according to the starting key frame information of each first element in the at least one first element; determining an ending image frame of each second element existing in the multi-frame image according to the ending key frame information of each first element; and determining that the second elements exist in an intermediate image frame in the multi-frame image according to the frame rate of the first elements, wherein the intermediate image frame is positioned between the starting image frame and the ending image frame.
In one embodiment, the location parameters include coordinate information;
the processing unit 802, when determining the position of the respective second element in the respective target image frame according to the position parameter of the respective first element, is configured to: and determining the position of each second element in each target image frame according to the coordinate information of each first element and the number of the target image frames.
In one embodiment, the location parameters further include pose information;
the processing unit 802, when determining the position of each second element in each target image frame according to the coordinate information of each first element and the number of target image frames, is configured to: determine the pose information of each second element in each target image frame according to the pose information of each first element and the number of target image frames; and generate the multi-frame image according to the position and pose information of each second element in each target image frame.
In one embodiment, the multi-frame image includes the respective second elements and the interaction objects corresponding to the respective second elements, and the positions of the respective second elements in the multi-frame image are the same as the positions of the interaction objects corresponding to the respective second elements in the multi-frame image;
the apparatus further comprises a communication unit 803: the communication unit 803 is configured to, after the processing unit 802 creates the animation composed of the multiple frames of images, upon receiving a trigger event sent by a client, find an animation matching the trigger event, and send the animation to the client, where the trigger event is generated by the client upon detecting an operation on a user interface; the communication unit 803, after the processing unit 802 is configured to create an animation composed of the multiple frames of images, is further configured to receive an operation event sent by the client, where the operation event is generated when the client monitors, during playing the animation, an operation on the second element included in any image frame in the animation through an interactive object corresponding to any second element; the processing unit 802 is further configured to generate at least one frame of image corresponding to the operation event; the processing unit 802 is further configured to send the at least one frame of image to the client after the processing unit 802 is configured to create the animation composed of the plurality of frames of images, so that the client displays the at least one frame of image.
In one embodiment, the processing unit 802 is further configured to: before generating the multi-frame image according to the image frame parameter and the position parameter of the at least one first element, generate an interactive object corresponding to each second element in the at least one second element; and generate a multi-frame image including the respective second elements and the interactive objects corresponding to the respective second elements according to the image frame parameter and the position parameter of the at least one first element.
In one embodiment, the playing interface for playing the animation and the user interface are displayed in different layers of the client, and the display interface for displaying the at least one frame of image and the playing interface are displayed in different layers of the client.
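By way of illustration only, a client might realize these interfaces as a simple z-ordered layer stack. The layer names and the ordering below are assumptions of this sketch; the embodiments require only that the interfaces live in different layers.

```typescript
// Hedged sketch: one possible client-side layer stack.
const clientLayers = [
  { name: "userInterface",   zIndex: 0 }, // ordinary UI; the source of trigger events
  { name: "animationPlayer", zIndex: 1 }, // playing interface for the created animation
  { name: "reactionDisplay", zIndex: 2 }, // display interface for operation-event frames
];
```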
In one embodiment, the processing unit 802 is further configured to: before generating the at least one frame of image corresponding to the operation event, search for a target interactive object in a memory, where the second element corresponding to the target interactive object matches a third element contained in the at least one frame of image; determine the interactive object corresponding to the third element as the target interactive object; and generate, according to the operation event, at least one frame of image including the third element and the target interactive object, where the position of the third element in the at least one frame of image is the same as the position of the target interactive object in the at least one frame of image.
In one embodiment, the processing unit 802 is further configured to: before searching for the target interactive object in the memory, store, upon receiving element indication information sent by the client, the interactive object corresponding to the target second element indicated by the element indication information into the memory, where the element indication information is generated when the client, during playback of the animation, detects an operation on the target second element contained in any image frame of the animation through the interactive object corresponding to that target second element.
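A minimal sketch of the in-memory interactive-object store implied by the two preceding embodiments follows. Keying the store by an element identifier is an assumption of this sketch, as are all names used.

```typescript
// Hedged sketch: cache interactive objects on element indication, then
// look them up before generating the reaction frames.
interface InteractiveObject {
  elementId: string;
  hitBox: { x: number; y: number; width: number; height: number };
}

const interactiveObjectCache = new Map<string, InteractiveObject>();

// On receiving element indication information from the client, cache the
// interactive object of the indicated target second element.
function onElementIndication(obj: InteractiveObject): void {
  interactiveObjectCache.set(obj.elementId, obj);
}

// Before generating the reaction frames, look up the cached interactive
// object whose second element matches the third element of those frames.
function findTargetInteractiveObject(thirdElementId: string): InteractiveObject | undefined {
  return interactiveObjectCache.get(thirdElementId);
}
```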
According to an embodiment of the present invention, the steps of the animation creation methods shown in fig. 2, fig. 4 and fig. 7, in which a server is the execution body, may be performed by the units of the apparatus 80 shown in fig. 8. For example, step S201 described in fig. 2 may be performed by the acquisition unit 801 of the apparatus 80 shown in fig. 8, and steps S202 and S203 may be performed by the processing unit 802 of the apparatus 80 shown in fig. 8.
According to another embodiment of the present invention, the units of the apparatus 80 shown in fig. 8 may be combined, individually or collectively, into one or several other units, or some unit(s) may be further split into functionally smaller units; either arrangement achieves the same operation without affecting the technical effect of the embodiment of the present invention. The units are divided based on logical functions; in practical applications, the function of one unit may be realized by several units, or the functions of several units may be realized by one unit. In other embodiments of the present invention, the apparatus may also include other units, and in practical applications these functions may be realized with the assistance of, or through the cooperation of, several units.
According to another embodiment of the present invention, the apparatus shown in fig. 8 may be constructed, and the animation creation method according to an embodiment of the present invention implemented, by running a computer program (including program code) capable of executing the steps of the methods shown in fig. 2, fig. 4 and fig. 7 on a general-purpose computing device, such as a computer, that includes processing elements and storage elements such as a central processing unit (CPU), a random access memory (RAM) and a read-only memory (ROM). The computer program may be recorded on, for example, a computer storage medium, and loaded into and run on the above computing device via the computer storage medium.
Based on the above embodiments of the animation creation method, an embodiment of the invention further provides a server. Fig. 9 is a schematic structural diagram of a server according to an embodiment of the present invention.
The server 90 includes at least one processor 901 configured to implement the server-side animation creation method provided by the embodiments of the present application. The processor 901 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, operations and logical blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The operations of the methods disclosed in the embodiments of the present application may be performed directly by a hardware processor, or by a combination of hardware and software modules within the processor.
The server 90 may further include an input interface 902 and an output interface 903 for implementing the sending and receiving operations of the server in the methods provided by the embodiments of the present application. In the embodiments of the present application, the input interface 902 and the output interface 903 may be transceivers, circuits, buses, modules or other types of communication interfaces for communicating with other devices over a transmission medium. The processor 901 sends and receives data through the input interface 902 and the output interface 903 to implement the methods described in the method embodiments of fig. 2, fig. 4 and fig. 7 above.
The server 90 may also include at least one memory 905 for storing program instructions and/or data. The processor 901 may operate in cooperation with the memory 905 and execute the program instructions stored in it; at least one of the at least one memory may be integrated in the processor. In one embodiment, one or more instructions stored in a computer storage medium 904 may be loaded and executed by the processor 901 to implement the corresponding steps performed by the server in the animation creation methods of fig. 2, fig. 4 and fig. 7 above. In a particular implementation, the one or more instructions in the computer storage medium are loaded by the processor 901 to perform the following steps:
When target animation data are loaded from a preset animation library, extracting animation parameters of at least one first element in the target animation data, wherein the animation parameters comprise image frame parameters and position parameters; generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element; and creating an animation consisting of the plurality of frames of images, wherein the animation comprises at least one second element, and the at least one second element is in one-to-one correspondence with the at least one first element.
In one embodiment, when generating the multi-frame image according to the image frame parameter and the position parameter of the at least one first element, the processor 901 is specifically configured to perform the following steps: determining, according to the image frame parameters of each first element in the at least one first element, the target image frames in the multi-frame image in which the second element corresponding to each first element exists, where the target image frames include at least two frames of images; determining the position of each second element in each target image frame according to the position parameter of each first element; and generating the multi-frame image according to the positions of the respective second elements in the respective target image frames.
In one embodiment, the image frame parameters include starting key frame information, ending key frame information and a frame rate, and the target image frames include a starting image frame, an ending image frame and intermediate image frames;
when determining, according to the image frame parameter of each first element in the at least one first element, the target image frames in the multi-frame image in which the second element corresponding to each first element exists, the processor 901 is specifically configured to perform the following steps: determining the starting image frame in which each second element exists in the multi-frame image according to the starting key frame information of each first element in the at least one first element; determining the ending image frame in which each second element exists in the multi-frame image according to the ending key frame information of each first element; and determining the intermediate image frames in which each second element exists in the multi-frame image according to the frame rate of each first element, where the intermediate image frames are located between the starting image frame and the ending image frame.
In one embodiment, the position parameters include coordinate information;
when determining the position of each second element in each target image frame according to the position parameter of each first element, the processor 901 is specifically configured to determine the position of each second element in each target image frame according to the coordinate information of each first element and the number of target image frames.
In one embodiment, the position parameters further include pose information;
when determining the position of each second element in each target image frame according to the coordinate information of each first element and the number of target image frames, the processor 901 is specifically configured to perform the following steps: determining the pose information of each second element in each target image frame according to the pose information of each first element and the number of target image frames; and generating the multi-frame image according to the position and pose information of each second element in each target image frame.
In one embodiment, the multi-frame image includes the respective second elements and the interactive objects corresponding to the respective second elements, and the position of each second element in the multi-frame image is the same as the position of its corresponding interactive object in the multi-frame image;
after creating the animation composed of the multi-frame image, the processor 901 is further configured to perform the following steps: when a trigger event sent by a client is received, searching for an animation matching the trigger event and sending the animation to the client, where the trigger event is generated when the client detects an operation on a user interface; receiving an operation event sent by the client, where the operation event is generated when the client, during playback of the animation, detects an operation on the second element contained in any image frame of the animation through the interactive object corresponding to that second element; generating at least one frame of image corresponding to the operation event; and sending the at least one frame of image to the client so that the client displays the at least one frame of image.
In one embodiment, before generating the multi-frame image according to the image frame parameter and the position parameter of the at least one first element, the processor 901 is further configured to perform the following steps: generating an interactive object corresponding to each second element in the at least one second element; and generating a multi-frame image including the respective second elements and the interactive objects corresponding to the respective second elements according to the image frame parameter and the position parameter of the at least one first element.
In one embodiment, the playing interface for playing the animation and the user interface are displayed in different layers of the client, and the display interface for displaying the at least one frame of image and the playing interface are displayed in different layers of the client.
In one embodiment, before generating the at least one frame of image corresponding to the operation event, the processor 901 is further configured to perform the following steps: searching for a target interactive object in a memory, where the second element corresponding to the target interactive object matches a third element contained in the at least one frame of image; determining the interactive object corresponding to the third element as the target interactive object; and generating, according to the operation event, at least one frame of image including the third element and the target interactive object, where the position of the third element in the at least one frame of image is the same as the position of the target interactive object in the at least one frame of image.
In one embodiment, before searching for the target interactive object in the memory, the processor 901 is further configured to perform the following step: when element indication information sent by the client is received, storing the interactive object corresponding to the target second element indicated by the element indication information into the memory, where the element indication information is generated when the client, during playback of the animation, detects an operation on the target second element contained in any image frame of the animation through the interactive object corresponding to that target second element.
An embodiment of the invention also provides a computer storage medium (memory), which is a memory device in the server for storing programs and data. It is understood that the computer storage medium here may include both a built-in storage medium of the server and an extended storage medium supported by the server. The computer storage medium provides storage space that stores the operating system of the server. Also stored in this storage space are one or more instructions, which may be one or more computer programs (including program code), suitable for loading and execution by the processor 901. The computer storage medium may be a high-speed RAM memory, or a non-volatile memory such as at least one disk memory; optionally, it may also be at least one computer storage medium located remotely from the aforementioned processor.
Those of ordinary skill in the art will appreciate that the various illustrative elements and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device. The computer instructions may be stored on, or transmitted via, a computer storage medium. The computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. A computer storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or data center, integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)), among others.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of animation creation, the method comprising:
when target animation data are loaded from a preset animation library, extracting animation parameters of at least one first element in the target animation data, wherein the animation parameters comprise image frame parameters and position parameters;
generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element;
and creating an animation consisting of the multi-frame image, wherein the animation comprises at least one second element, and the at least one second element is in one-to-one correspondence with the at least one first element.
2. The method of claim 1, wherein the generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element comprises:
determining, according to the image frame parameters of each first element in the at least one first element, the target image frames in the multi-frame image in which the second element corresponding to each first element exists, wherein the target image frames comprise at least two frames of images;
determining the position of each second element in each target image frame according to the position parameter of each first element;
and generating the multi-frame image according to the position of each second element in each target image frame.
3. The method of claim 2, wherein the image frame parameters comprise starting key frame information, ending key frame information and a frame rate, and the target image frames comprise a starting image frame, an ending image frame and intermediate image frames;
the determining, according to the image frame parameters of each first element in the at least one first element, the target image frames in the multi-frame image in which the second element corresponding to each first element exists comprises:
determining the starting image frame in which each second element exists in the multi-frame image according to the starting key frame information of each first element in the at least one first element;
determining the ending image frame in which each second element exists in the multi-frame image according to the ending key frame information of each first element;
and determining the intermediate image frames in which each second element exists in the multi-frame image according to the frame rate of each first element, wherein the intermediate image frames are located between the starting image frame and the ending image frame.
4. The method of claim 2, wherein the position parameters include coordinate information;
the determining the position of each second element in each target image frame according to the position parameter of each first element comprises:
determining the position of each second element in each target image frame according to the coordinate information of each first element and the number of target image frames.
5. The method of claim 4, wherein the position parameters further include pose information;
the determining the position of each second element in each target image frame according to the coordinate information of each first element and the number of target image frames further comprises:
determining the pose information of each second element in each target image frame according to the pose information of each first element and the number of target image frames;
and the generating the multi-frame image according to the position of each second element in each target image frame comprises:
generating the multi-frame image according to the position and pose information of each second element in each target image frame.
6. The method of claim 1, wherein the multi-frame image comprises the respective second elements and the interactive objects corresponding to the respective second elements, and the position of each second element in the multi-frame image is the same as the position of its corresponding interactive object in the multi-frame image;
after the creating of the animation composed of the multi-frame image, the method further comprises:
when a trigger event sent by a client is received, searching for an animation matching the trigger event and sending the animation to the client, wherein the trigger event is generated when the client detects an operation on a user interface;
receiving an operation event sent by the client, wherein the operation event is generated when the client, during playback of the animation, detects an operation on the second element contained in any image frame of the animation through the interactive object corresponding to that second element;
generating at least one frame of image corresponding to the operation event;
and sending the at least one frame of image to the client, so that the client displays the at least one frame of image.
7. The method of claim 6, wherein before the generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element, the method further comprises:
generating an interactive object corresponding to each second element in the at least one second element;
and the generating a multi-frame image according to the image frame parameter and the position parameter of the at least one first element comprises:
generating a multi-frame image comprising the respective second elements and the interactive objects corresponding to the respective second elements according to the image frame parameter and the position parameter of the at least one first element.
8. The method of claim 6, wherein a playing interface for playing the animation and the user interface are displayed in different layers of the client, and a display interface for displaying the at least one frame of image and the playing interface are displayed in different layers of the client.
9. The method of claim 6, wherein before the generating at least one frame of image corresponding to the operation event, the method further comprises:
searching for a target interactive object in a memory, wherein the second element corresponding to the target interactive object matches a third element contained in the at least one frame of image;
determining the interactive object corresponding to the third element as the target interactive object;
and the generating at least one frame of image corresponding to the operation event comprises:
generating, according to the operation event, at least one frame of image comprising the third element and the target interactive object, wherein the position of the third element in the at least one frame of image is the same as the position of the target interactive object in the at least one frame of image.
10. The method of claim 9, wherein before the searching for the target interactive object in the memory, the method further comprises:
when element indication information sent by the client is received, storing the interactive object corresponding to the target second element indicated by the element indication information into the memory, wherein the element indication information is generated when the client, during playback of the animation, detects an operation on the target second element contained in any image frame of the animation through the interactive object corresponding to that target second element.
CN202110353933.9A 2021-03-31 2021-03-31 Animation creating method and device Pending CN114677462A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination