CN112270734A - Animation generation method, readable storage medium and electronic device - Google Patents

Animation generation method, readable storage medium and electronic device

Info

Publication number
CN112270734A
Authority
CN
China
Prior art keywords
bone
information
skeleton
animation
vertex
Prior art date
Legal status
Granted
Application number
CN202011119346.5A
Other languages
Chinese (zh)
Other versions
CN112270734B (en)
Inventor
程驰
周佳
包英泽
Current Assignee
Beijing Dami Technology Co Ltd
Original Assignee
Beijing Dami Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dami Technology Co Ltd filed Critical Beijing Dami Technology Co Ltd
Priority to CN202011119346.5A
Publication of CN112270734A
Application granted
Publication of CN112270734B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Abstract

The invention discloses an animation generation method, a storage medium and an electronic device. In the technical solution of the embodiments of the invention, a video containing a moving person or animal is recorded in advance and used as the target video, and an image set is extracted from the target video by extraction software. A trained neural network serves as the target detection model, the extracted image set serves as the detection set, and key point prediction information that locates the motion of the human or animal body is identified; key point correction information is obtained from the key point prediction information at two consecutive moments, bone information is obtained from the key point correction information, and a bone state change matrix is obtained from the state change of the bone information at two different moments. The bones are bound to the component vertices of the animated figure. The initial vertex positions of the animated figure are obtained, and the vertices are updated with the bone state change matrices according to the binding relationship to obtain an animation frame. The animation frames are then combined into an animation, which saves production cost and shortens the production cycle.

Description

Animation generation method, readable storage medium and electronic device
Technical Field
The invention relates to the field of computers, in particular to an animation generation method, a readable storage medium and electronic equipment.
Background
In the field of online education, a large number of animation videos need to be produced. The animation method in the prior art usually adjusts the animated figure (which may be a person or an animal) frame by frame in a specific scene. This way of producing animation videos involves a large workload, an extremely high labor cost, and a long production cycle, and it is difficult to meet the demand for animation videos in the field of online education.
Disclosure of Invention
In view of this, in order to solve the problems of the long production cycle and high cost of manually produced animation videos, embodiments of the present invention provide an animation generation method, a readable storage medium, and an electronic device.
In a first aspect, an embodiment of the present invention provides an animation generation method, including:
identifying a target video based on a target detection model, acquiring prediction information of key points in each key frame of the target video, and generating a prediction information sequence for each key point;
calculating correction information of the corresponding key point at the current moment according to the prediction information of the current frame and the previous frame in the prediction information sequence;
obtaining bone information through the correction information of two key points having an association relationship, wherein the bone information comprises a bone identifier and bone position information, and the bone position information is obtained through the correction information of the two key points;
generating a skeleton state change matrix according to the change of the skeleton position information at the current moment and the previous moment;
and calculating the space position of the component vertex of the updated animation image according to the binding relationship between the bone identifier and the component vertex and each bone state change matrix, and taking the updated animation image as an animation frame in the animation video.
Preferably, the calculating the spatial positions of the component vertices of the updated animated figure according to the bone state change matrices includes:
acquiring the spatial position of the part vertex of the animation image at the current moment;
acquiring influence weight of each skeleton state change matrix on the top point of the part through each skeleton position information;
and calculating the influence of each skeleton state change matrix on the component vertex one by one to obtain an influenced component vertex space position set, and performing weighted summation on each element in the influenced component vertex space position set to obtain the updated space position of the component vertex of the animation image at the next moment.
Preferably, the bone state change matrix comprises:
the displacement vectors of the skeleton at the current moment and the last moment;
the rotation vectors of the skeleton at the current moment and the last moment; and
a scaling vector of the skeleton at a current time and a previous time.
Preferably, the key points are used to characterize a skeletal location reference point of the human or animal body in motion.
Preferably, the bone location information includes: bone location and bone confidence.
Preferably, the bone position information is obtained by the correction information of the two key points, including:
acquiring correction information of a first key point and correction information of a second key point; the correction information of a key point comprises: key point position information and a key point confidence; the key point position information represents the coordinates of the key point, and the key point confidence represents the degree of reliability of the key point position information.
Calculating bone position information according to the correction information of the first key point and the correction information of the second key point, wherein the bone position information comprises: bone location and bone confidence; wherein the bone position is obtained by calculating a weighted sum of first keypoint location information and second keypoint location information; wherein the influence weight of the first key point on the bone position is the confidence coefficient of the first key point; the influence weight of the second key point on the bone position is the confidence coefficient of the second key point; the bone confidence is obtained by processing the confidence of the first key point and the confidence of the second key point through a preset rule.
Preferably, the bone identifier-component vertex binding relationship is obtained in a predetermined manner.
Preferably, the method further comprises:
and acquiring the binding relation between the bone identification and the component vertex through calculation.
Preferably, the method for obtaining the bone identification-component vertex binding relationship through calculation comprises the following steps:
calculating the distance between the component vertex of the animation image and each skeleton, binding the component vertex of the animation image to the skeleton with the closest distance, and obtaining the influence weight of each skeleton on the component vertex of the animation image through a normalization method.
In a second aspect, an embodiment of the present invention provides a storage medium for storing computer program instructions which, when executed by a processor, implement any of the methods described above.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, wherein the memory is configured to store one or more computer program instructions, and wherein the one or more computer program instructions are executed by the processor to perform the method according to any one of the above.
In the technical solution of the embodiments of the invention, a video containing a moving person or animal is recorded in advance and used as the target video, and an image set is extracted from the target video by extraction software. A trained neural network serves as the target detection model, the extracted image set serves as the detection set, and key point prediction information that locates the motion of the human or animal body is identified; key point correction information is obtained from the key point prediction information at two consecutive moments, bone information is obtained from the key point correction information, and a bone state change matrix is obtained from the state change of the bone information at two different moments. The bones are bound to the component vertices of the animated figure. The initial vertex positions of the animated figure are obtained, and the vertices are updated with the bone state change matrices according to the binding relationship to obtain an animation frame. A plurality of animation frames are then combined into an animation, which saves animation production cost and shortens the animation production cycle.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of an animation generation method according to an embodiment of the invention;
FIG. 2 is a key frame diagram of an embodiment of the present invention;
FIG. 3 is a schematic illustration of skeletal information in accordance with an embodiment of the present invention;
FIG. 4 is a diagram illustrating updating of vertices of an animated character according to an embodiment of the present invention;
FIG. 5 is a flow chart of a method for computing a bone identification-component vertex binding relationship according to an embodiment of the present invention;
FIG. 6 is a flow chart of obtaining an updated animated figure according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an electronic device embodying the present invention.
Detailed Description
The present invention will be described below based on examples, but the present invention is not limited to only these examples. In the following detailed description of the present invention, certain specific details are set forth. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details. Well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.
Further, those of ordinary skill in the art will appreciate that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, throughout the description, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
In the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the field of online education, a large number of animation videos need to be made. In order to improve the animation drawing efficiency and reduce the labor cost, the embodiment of the invention provides an animation generation method.
Fig. 1 is a flowchart of an animation generation method according to an embodiment of the present invention, and referring to fig. 1, the animation generation method according to the embodiment of the present invention includes the following steps:
step 100, identifying a target video based on a target detection model, obtaining prediction information of key points in each key frame of the target video, and generating a prediction information sequence for each key point.
The target detection model is a trained neural network model, obtained by training a neural network in advance.
The target video is a pre-recorded video or a given video, which includes a moving human or animal body.
A key frame is an image file with a time stamp; a plurality of key frames can be obtained by extracting the image stream of the target video. Each key frame contains a human body or an animal body.
In this embodiment, the prediction information of the keypoint includes position information and confidence, and is an output result of the target detection model. Specifically, the position information may be represented by plane or spatial coordinates, and the confidence is the confidence of the position information.
Referring specifically to fig. 2, key frame 1 and key frame 2 are image files extracted from the target video. In order to identify the motion of the moving human or animal body, in the embodiment of the present invention each key frame is sent to the target detection model as part of the detection set, and the prediction information of the key points of the moving human or animal body is expected to be obtained through the target detection model.
In an alternative implementation, the key frames are obtained from the target video. Specifically, the target video is extracted to obtain each key frame of the target video. For example, each key frame in the target video is extracted by video extraction software. Key frame 1 (time tag T1) is the image file extracted at time tag T1; key frame 2 (time tag T2) is the image file extracted at time tag T2.
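The patent does not name the extraction software. As a minimal sketch, assuming OpenCV is used as the extraction tool, time-tagged key frames could be pulled from the target video as follows (the sampling step and the in-memory frame list are illustrative choices, not part of the disclosure):

```python
import cv2  # assumption: OpenCV as the "extraction software"; the patent does not name a tool

def extract_key_frames(video_path, step=1):
    """Extract frames from the target video together with their time tags (seconds)."""
    capture = cv2.VideoCapture(video_path)
    key_frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            time_tag = capture.get(cv2.CAP_PROP_POS_MSEC) / 1000.0  # time stamp of this key frame
            key_frames.append((time_tag, frame))
        index += 1
    capture.release()
    return key_frames
```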
In order to recognize the motion of the moving human or animal body in the key frame, first, prediction information of key points of the moving human or animal body is acquired through the key frame.
In an alternative implementation, each key frame is input into the target detection model as part of the detection set to identify the prediction information of the key points. That is, the set of key frames is used as the detection set, the detection set is input into the trained neural network (the target detection model), and the prediction information of the key points of the moving human or animal body is output by the trained neural network (the target detection model). The prediction information of a key point comprises position information and a confidence: the key point position information locates an important reference point on the human or animal body, so that by connecting the key points according to their association relationships the action of the human or animal body can be represented; the key point confidence is the degree of confidence of the key point position output by the neural network. For example, the key points locating the human action in key frame 1 are identified by the target detection model: key point 1, key point 2, and so on, each represented by its key point prediction information. In the same way, the key points locating the human action are identified in key frame 2: key point 1, key point 2, and so on.
Referring to fig. 3, it can be seen that the motion of the person in the key frame 2 is changed compared to the motion of the person in the key frame 1. Correspondingly, the prediction information of the key point 1 and the key point 2 in the key frame 2 identified by the target detection model is different from the prediction information of the corresponding key point in the key frame 1. That is to say, the prediction information of the same key point in two consecutive key frames changes, so that the position of the limb or trunk of the human body can be represented, and the change of the position of the limb or trunk can further represent the change of the motion of the human body.
Each key frame of the target video forms part of the detection set, which is sent into the target detection model, so that the prediction information of each key point of the human body in each key frame is obtained. A series of actions can then be represented by generating a prediction information sequence for each key point.
Then, the time tags of the key frames are obtained, and the prediction information sequence of each key point over the key frames is generated. For example, the key point prediction information of key point 1 in key frame 1 is acquired, the key point prediction information of key point 1 in key frame 2 is acquired, and so on, and the prediction information sequence of key point 1 is generated according to the time tag of each key frame; in the same way, the prediction information sequence of key point 2 is obtained, and by analogy the prediction information sequence of every key point is obtained.
That is, a series of actions of the human body or the animal body can be represented by obtaining the prediction information sequence of each key point through the target detection model.
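To illustrate how the per-key-point prediction information sequences might be organized, the following sketch assumes a hypothetical detect_keypoints(frame) wrapper standing in for the trained target detection model; it is assumed to return, for each key point identifier, a position and a confidence for one key frame:

```python
from collections import defaultdict

def build_prediction_sequences(key_frames, detect_keypoints):
    """key_frames: list of (time_tag, frame).
    detect_keypoints: hypothetical model wrapper returning
    {keypoint_id: ((x, y), confidence)} for a single frame."""
    sequences = defaultdict(list)  # keypoint_id -> [(time_tag, position, confidence), ...]
    for time_tag, frame in key_frames:
        predictions = detect_keypoints(frame)  # output of the target detection model
        for keypoint_id, (position, confidence) in predictions.items():
            sequences[keypoint_id].append((time_tag, position, confidence))
    # keep each prediction information sequence ordered by time tag
    for keypoint_id in sequences:
        sequences[keypoint_id].sort(key=lambda item: item[0])
    return sequences
```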
In order to obtain more accurate human body actions, the key point prediction information needs to be corrected; that is, the action represented by the prediction information of each key point output by the target detection model may deviate from the actual action of the human body in the key frame, and a more accurate action can be obtained by correcting the prediction information. See step 200 for details.
Step 200, calculating the correction information of the corresponding key point at the current moment according to the prediction information of the current frame and the previous frame in the prediction information sequence.
With continued reference to FIG. 3, in an alternative implementation, the image file at time T2 is taken as the current frame, and the image file at time T1 is taken as the previous frame. Prediction information of each key point at the time of T2 is acquired, and prediction information of each key point at the time of T1 is acquired.
In order to correct the information of the key points, correction information of the key points is obtained through prediction information of the key points of the current frame and the previous frame. The correction information of the key point includes: corrected position information and corrected confidence.
Optionally, the corrected position information of key point 1 at time T2 is calculated as follows: the position information of the prediction of key point 1 at time T1 is acquired, the position information of the prediction of key point 1 at time T2 is acquired, and a weighted summation is performed with the confidences as weights; the resulting value is used as the corrected position information. In this way, the corrected position information of each key point of the current key frame at the current moment is obtained.
The method for calculating the corrected confidence level of the keypoint 1 at the time point T2 is as follows: obtaining confidence s0 of the prediction information of the keypoint 1 at the time T1, obtaining confidence s1 of the prediction information of the keypoint 1 at the time T2, wherein the corrected confidence s of the keypoint 1 at the time T2 is as follows:
[Formula image in the original: the corrected confidence s of key point 1 at time T2 is computed from s0 and s1 by a predetermined rule.]
therefore, the corrected confidence of the corresponding key point of the current key frame at the current moment can be obtained through the method.
By the above method, the correction information of each key point at the current moment can be calculated: the correction information of key point 1, key point 2, and so on, is calculated in turn. Similarly, by shifting the time tags, the correction information of the key points at the previous moment, the next moment, and so on can be calculated.
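As a sketch of the correction step, the function below combines the predictions of one key point at times T1 and T2. The corrected position follows the confidence-weighted summation described above (normalizing the weights is an added assumption so the result remains a coordinate); the corrected-confidence rule appears only as a formula image in the original, so the simple mean used here is a stand-in, not the disclosed formula:

```python
def correct_keypoint(pos_t1, conf_t1, pos_t2, conf_t2):
    """Corrected position: confidence-weighted combination of the predictions at T1 and T2.
    Corrected confidence: the patent gives this only as a formula image; the mean used
    here is an illustrative assumption, not the disclosed rule."""
    total = conf_t1 + conf_t2
    if total == 0:
        corrected_pos = pos_t2  # no usable confidence; fall back to the current prediction
    else:
        corrected_pos = tuple(
            (p1 * conf_t1 + p2 * conf_t2) / total for p1, p2 in zip(pos_t1, pos_t2)
        )
    corrected_conf = (conf_t1 + conf_t2) / 2.0  # assumption
    return corrected_pos, corrected_conf
```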
Step 300, obtaining skeleton information through the correction information of two key points with an association relationship, wherein the skeleton information comprises skeleton identification and skeleton position information, and the skeleton position information is obtained through the correction information of the two key points.
The bone information can be determined through the association relationships between the key points. The bone information comprises a bone identifier and bone position information. With continued reference to fig. 3, a bone can be determined by the association relationship between key point 1 and key point 2, where bone 1 denotes the bone identifier; the bone position information is obtained from the correction information of key point 1 and key point 2. According to the association relationships between different key points, a plurality of pieces of bone information can be obtained.
The bone location information includes a bone location and a bone confidence. The bone position is obtained by computing a weighted sum of the first keypoint corrected position information and the second keypoint corrected position information. Wherein the weight of the effect of the first keypoint on the bone position is the revised confidence of the first keypoint; the weight of the impact of the second keypoint on the bone location is the revised confidence of the second keypoint. For example: the bone position of the bone 1 at the time T2 is calculated by the following method:
bone position of bone 1 at time T2 = (corrected position information of the first key point at time T2) × (corrected confidence of the first key point at time T2) + (corrected position information of the second key point at time T2) × (corrected confidence of the second key point at time T2).
The skeletal confidence may then be obtained by processing the revised confidence of the first keypoint and the revised confidence of the second keypoint.
Specifically, a corrected confidence p0 of the first keypoint at the time T2 is obtained, a corrected confidence p1 of the second keypoint at the time T2 is obtained, and the confidence p of the bone 1 at the time T2 is:
[Formula image in the original: the bone confidence p of bone 1 at time T2 is computed from p0 and p1 by a predetermined rule.]
thus, by performing the above-described step 300 for each bone, bone position information of all bones of the human or animal body can be obtained.
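A minimal sketch of step 300 for a single bone, under the same assumptions as above: the bone position is the confidence-weighted combination of the two corrected key points (normalization is an added assumption), and the bone confidence rule, given only as a formula image, is replaced by a mean for illustration:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Bone:
    bone_id: str
    position: Tuple[float, ...]  # weighted combination of the two corrected key points
    confidence: float

def bone_from_keypoints(bone_id, corrected_a, corrected_b):
    """corrected_a / corrected_b: (position, confidence) of two associated key points."""
    (pos_a, conf_a), (pos_b, conf_b) = corrected_a, corrected_b
    total = conf_a + conf_b
    if total == 0:
        position = tuple((a + b) / 2.0 for a, b in zip(pos_a, pos_b))
    else:
        position = tuple((a * conf_a + b * conf_b) / total for a, b in zip(pos_a, pos_b))
    confidence = (conf_a + conf_b) / 2.0  # the disclosed rule is a formula image; mean is an assumption
    return Bone(bone_id, position, confidence)
```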
And 400, generating a bone state change matrix according to the change of the bone position information at the current moment and the previous moment.
The bone state change matrix comprises a displacement vector t of the bone at the current time and the previous time, a rotation vector r of the bone at the current time and the previous time, and a scaling vector z of the bone at the current time and the previous time.
Referring to fig. 4, specifically, a bone state change matrix can be calculated by obtaining bone positions of the same bone at different times.
For example: the time T2 is the current time, and the time T1 is the previous time. Obtaining the bone position of the bone 1 at the time of T1, obtaining the bone position of the bone 1 at the time of T2:
firstly, obtaining a displacement vector T of the bone at the current time T2 and the last time T1 by calculating the displacement change of the bone 1 from the time T1 to the time T2;
secondly, obtaining a rotation vector r of the bone at the current time T2 and the last time T1 by calculating the rotation change of the bone 1 from the time T1 to the time T2;
thirdly, a scaling vector z of the bone at the current time T2 and the previous time T1 is obtained by calculating the scaling change of the bone 1 from the time T1 to the time T2.
These vectors together form the bone state change matrix.
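As an illustration only, the sketch below composes a displacement, a rotation, and a scaling into one homogeneous transform. It is written in 2D for brevity, and the composition order (scale, then rotate, then translate) is an assumption; the patent does not specify how the three vectors are combined into the matrix:

```python
import numpy as np

def bone_state_change_matrix(translation, rotation_angle, scale):
    """Compose displacement t, rotation r, and scaling z between the previous and the
    current moment into one homogeneous transform (2D simplification)."""
    tx, ty = translation
    sx, sy = scale
    c, s = np.cos(rotation_angle), np.sin(rotation_angle)
    T = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)
    Z = np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)
    return T @ R @ Z  # applied to a column vector: scaling, then rotation, then displacement
```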
The animation image component vertexes can be updated through the skeleton state change matrix, that is, the animation image is updated according to the state change of the skeleton information of the moving human body or animal body. See step 500 for details.
And 500, calculating the space position of the component vertex of the updated animation image according to the binding relationship between the bone identifier and the component vertex and each bone state change matrix, and taking the updated animation image as an animation frame in the animation video.
With particular reference to FIG. 5, the component vertices are the anchor points of the parts of the animated figure, such as the anchor points of the limbs, torso, and face. Taking time T2 as the current moment and time T1 as the previous moment, the change of the human body's motion can be represented by the change of each bone. For example, from time T1 to time T2, the parameters of bone 1 and bone 2 have changed, while the parameters of bone 3 have not. Bone 1 and bone 2 represent an arm of the human body, and bone 3 represents the torso. In order to update the animated figure according to the bone changes of the human body, the bone identifiers are bound to the component vertices.
In an alternative implementation, the binding relationship between the bone identifier and the component vertex is obtained in a predetermined manner; specifically, a designer may bind the bone identifiers to the component vertices of the animated figure in advance. For example, bone 1 is bound to vertex 1 and bone 2 is bound to vertex 2. Those skilled in the art will appreciate that the binding relationship between bone identifiers and component vertices is not necessarily a one-to-one correspondence.
In an alternative implementation, the bone identifier-component vertex binding relationship is obtained through a calculation method, which includes three substeps, see fig. 5 in particular.
Step 501, calculating the distance between the part vertex of the animation image and each bone.
In an alternative implementation, the distances between the top points of the components of the animated figure and the bones at the same time are obtained.
Step 502, binding the component vertex of the animated figure to the bone closest to the component vertex.
In an alternative implementation, the component vertices of the animated figure are bound to the closest said skeleton.
And 503, obtaining the influence weight of each skeleton on the component vertex of the animation image through a normalization method.
In an alternative implementation, the distance D from each bone to the component vertex is calculated, and the influence weight of that bone on the component vertex is taken as 1/D². For each component vertex, the influence weights of all bones on that vertex are then normalized so that they sum to 1, giving the influence weight of each bone on the component vertex.
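A sketch of steps 501 to 503, assuming Euclidean distances and a small epsilon to avoid division by zero (the epsilon is an implementation detail, not part of the disclosure):

```python
import numpy as np

def bind_vertex_to_bones(vertex_position, bone_positions, eps=1e-8):
    """Return the id of the closest bone and the normalised 1/D^2 influence weights
    of every bone on this component vertex.
    bone_positions: {bone_id: position}."""
    vertex = np.asarray(vertex_position, dtype=float)
    distances = {
        bone_id: np.linalg.norm(vertex - np.asarray(position, dtype=float))
        for bone_id, position in bone_positions.items()
    }
    closest_bone = min(distances, key=distances.get)  # the vertex is bound to this bone
    raw = {bone_id: 1.0 / (d * d + eps) for bone_id, d in distances.items()}
    total = sum(raw.values())
    weights = {bone_id: w / total for bone_id, w in raw.items()}  # weights sum to 1
    return closest_bone, weights
```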
Then, the spatial positions of the updated component vertices of the animated figure are calculated according to the binding relationship between the bone identifiers and the component vertices.
Referring specifically to fig. 6, the spatial position of the updated animated character vertices is calculated by the following three sub-steps.
Step 510, obtaining the spatial position of the component vertex of the animated figure at the current time.
The spatial positions of all component vertices of the animated figure at the current moment are acquired. Specifically, animation frame 1 shows the animated figure at time T2 (time tag T2), and the motion and expression of the animated figure are determined by its component vertices. In connection with fig. 5, for example, vertex 1 and vertex 2 determine the specific state of a certain limb of the animated figure in the animation frame; similarly, the states of the limbs, torso, and face of the animated figure may be determined by its component vertices. In order to calculate animation frame 2 (time tag T3) at the next moment, the spatial positions of the vertices in animation frame 1 at the current moment need to be determined in advance. In an alternative implementation, a spatial position is represented by spatial coordinates.
And step 520, acquiring influence weight of each bone state change matrix on the top point of the part through each bone position information.
The binding relationship between bone 1, bone 2, bone 3 and vertex 1 is taken as an example and described in detail.
First, each bone state change matrix { R1, R2, R3} from time T1 to time T2 is obtained. For example, the bone change matrix R1 of the bone 1 from the time T1 to the time T2 can be obtained by calculation. Similarly, a bone change matrix R2 from the time T1 to the time T2 of the bone 2 can be obtained by calculation, and a bone change matrix R3 from the time T1 to the time T2 of the bone 3 can be obtained by calculation.
Next, the weight { P1, P2, P3} of the influence of each state change matrix of bone 1, bone 2, and bone 3 on vertex 1 is obtained. Specifically, the bone confidence P1 of the bone 1 at the time T2 is used as the influence weight P1 of the bone change matrix R1 on the vertex 1, the bone confidence P2 of the bone 2 at the time T2 is used as the influence weight P2 of the bone change matrix R2 on the vertex 1, and the bone confidence P3 of the bone 3 at the time T2 is used as the influence weight P3 of the bone change matrix R3 on the vertex 1.
And respectively obtaining the influence weight of each skeleton state change matrix on the top point of the component with the binding relation according to each skeleton position information.
Step 530, calculating the influence of each bone state change matrix on the component vertex one by one to obtain a set of affected component vertex spatial positions, and performing a weighted summation over the elements of this set to obtain the updated spatial position of the component vertex of the animated figure at the next moment.
In order to update the animation image of the animation frame 1, the influence of each bone state change matrix on the component vertex needs to be calculated one by one, and a set of the vertex space positions of the influenced components is obtained.
The binding relationship between bone 1, bone 2, bone 3 and vertex 1 is again taken as an example.
Acquiring a spatial position W1 of the vertex 1 at a time T2, updating the spatial position of the vertex 1 by using a bone change matrix R1, acquiring a component vertex spatial position W11 after the vertex 1 is affected as { X11, Y11 and Z11}, updating the spatial position of the vertex 1 by using the bone change matrix R2, acquiring a component vertex spatial position W12 after the vertex 1 is affected as { X12, Y12 and Z12}, and updating the spatial position of the vertex 1 by using the bone change matrix R3 to acquire a component vertex spatial position W13 after the vertex 1 is affected as { X13, Y13 and Z13 }. W11, W12, and W13 are taken as a set of spatial positions { W11, W12, W13 }.
Then, W11, W12 and W13 are weighted and summed to obtain an updated spatial position W2 of the vertex 1 of the animated image at the time T3, with the bone confidence P1 of the bone 1 at the time T2 as the weight of the affected component vertex spatial position W11, with the bone confidence P2 of the bone 2 at the time T2 as the weight of the affected component vertex spatial position W12, and with the bone confidence P3 of the bone 3 at the time T2 as the weight of the affected component vertex spatial position W13.
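A sketch of step 530 under the assumptions above: each bone state change matrix is applied to the current vertex position, and the affected positions are combined by a weighted sum using the bone confidences as weights. It is written to match the 2D homogeneous sketch earlier, although the patent's vertex positions are 3D, and whether the weights are re-normalized is not stated in the patent; this sketch normalizes them:

```python
import numpy as np

def update_vertex(position_t2, bone_matrices, bone_confidences):
    """position_t2: homogeneous vertex position, e.g. [x, y, 1], at the current moment T2.
    bone_matrices: state change matrices {R1, R2, R3} of the bones bound to this vertex.
    bone_confidences: bone confidences {P1, P2, P3} used as influence weights."""
    position = np.asarray(position_t2, dtype=float)
    affected = [matrix @ position for matrix in bone_matrices]   # W11, W12, W13
    weights = np.asarray(bone_confidences, dtype=float)
    weights = weights / weights.sum()                            # assumption: normalise the weights
    updated = sum(w * p for w, p in zip(weights, affected))      # weighted sum -> W2 at time T3
    return updated
```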
By the similar method, the spatial position of each vertex at the time T3 is calculated, thereby obtaining the animation frame at the time T3, that is, the updated animation character is taken as the animation frame in the animation video.
In the technical solution of the embodiments of the invention, a video containing a moving person or animal is recorded in advance and used as the target video, and an image set is extracted from the target video by extraction software. A trained neural network serves as the target detection model, the extracted image set serves as the detection set, and key point prediction information that locates the motion of the human or animal body is identified; key point correction information is obtained from the key point prediction information at two consecutive moments, bone information is obtained from the key point correction information, and a bone state change matrix is obtained from the state change of the bone information at two different moments. The bones are bound to the component vertices of the animated figure. The initial vertex positions of the animated figure are obtained, and the vertices are updated with the bone state change matrices according to the binding relationship to obtain an animation frame. A plurality of animation frames are then combined into an animation, which saves animation production cost and shortens the animation production cycle.
Fig. 7 is a schematic diagram of an electronic device of an embodiment of the invention.
The electronic device 7 as shown in fig. 7 comprises a general hardware structure comprising at least a processor 71 and a memory 72. The processor 71 and the memory 72 are connected by a bus 73. The memory 72 is adapted to store instructions or programs executable by the processor 71. The processor 71 may be a stand-alone microprocessor or a collection of one or more microprocessors. Thus, the processor 71 implements the processing of data and the control of other devices by executing instructions stored by the memory 72 to perform the method flows of embodiments of the present invention as described above. The bus 73 connects the above-described components together, and also connects the above-described components to a display controller 74 and a display device and an input/output (I/O) device 75. Input/output (I/O) devices 75 may be a mouse, keyboard, modem, network interface, touch input device, motion sensing input device, printer, and other devices known in the art. Typically, the input/output devices 75 are connected to the system through input/output (I/O) controllers 76.
As will be appreciated by one skilled in the art, embodiments of the present application may provide a method, apparatus (device) or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may employ a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations of methods, apparatus (devices) and computer program products according to embodiments of the application. It will be understood that each flow in the flow diagrams can be implemented by computer program instructions.
These computer program instructions may be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows.
These computer program instructions may also be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows.
Another embodiment of the invention relates to a non-transitory readable storage medium storing a computer-readable program for causing a computer to perform an embodiment of some or all of the above methods.
That is, as will be understood by those skilled in the art, all or part of the steps in the method for implementing the embodiments described above may be accomplished by specifying the relevant hardware through a program, where the program is stored in a readable storage medium and includes several instructions to enable a device (which may be a single chip, a chip, etc.) or a processor (processor) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned readable storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of animation generation, the method comprising:
identifying a target video based on a target detection model, acquiring prediction information of key points in each key frame of the target video, and generating a prediction information sequence for each key point;
calculating correction information of the corresponding key point at the current moment according to the prediction information of the current frame and the previous frame in the prediction information sequence;
obtaining bone information through the correction information of two key points having an association relationship, wherein the bone information comprises a bone identifier and bone position information, and the bone position information is obtained through the correction information of the two key points;
generating a skeleton state change matrix according to the change of the skeleton position information at the current moment and the previous moment;
and calculating the space position of the component vertex of the updated animation image according to the binding relationship between the bone identifier and the component vertex and each bone state change matrix, and taking the updated animation image as an animation frame in the animation video.
2. The method of claim 1, wherein said computing the spatial locations of the component vertices of the updated animated figure from each of the skeletal state change matrices comprises:
acquiring the spatial position of the part vertex of the animation image at the current moment;
acquiring influence weight of each skeleton state change matrix on the top point of the part through each skeleton position information;
and calculating the influence of each skeleton state change matrix on the component vertex one by one to obtain an influenced component vertex space position set, and performing weighted summation on each element in the influenced component vertex space position set to obtain the updated space position of the component vertex of the animation image at the next moment.
3. The method of claim 1, wherein the bone state change matrix comprises:
the displacement vectors of the skeleton at the current moment and the last moment;
the rotation vectors of the skeleton at the current moment and the last moment; and
a scaling vector of the skeleton at a current time and a previous time.
4. The method of claim 1, wherein the keypoints are used to characterize a skeletal positioning reference point of a human or animal body in motion.
5. The method of claim 1, wherein the bone location information comprises: bone location and bone confidence.
6. The method of claim 1, wherein the bone location information is obtained from the revised information of the two key points, comprising:
acquiring correction information of a first key point and acquiring correction information of a second key point; the correction information of the key point comprises: corrected position information and corrected confidence;
calculating bone position information according to the correction information of the first key point and the correction information of the second key point, wherein the bone position information comprises: bone location and bone confidence; wherein the bone position is obtained by calculating a weighted sum of the first keypoint corrected position information and the second keypoint corrected position information; weighting the influence of the first keypoint on the bone position by the revised confidence of the first keypoint; weighting the influence of the second keypoint on the bone position by the revised confidence of the second keypoint; the bone confidence is obtained by processing the corrected confidence of the first key point and the corrected confidence of the second key point through a preset rule.
7. The method of claim 1, wherein the bone identification-component vertex binding relationship is obtained in a predetermined manner.
The method of claim 1, further comprising:
and acquiring the binding relation between the bone identification and the component vertex through calculation.
8. The method according to claim 7, wherein the method for obtaining the bone identification-component vertex binding relationship through calculation comprises:
calculating the distance between the component vertex of the animation image and each skeleton, binding the component vertex of the animation image to the skeleton with the closest distance, and obtaining the influence weight of each skeleton on the component vertex of the animation image through a normalization method.
9. A computer readable storage medium storing computer program instructions, which when executed by a processor implement the method of any one of claims 1-8.
10. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the method of any of claims 1-8.
CN202011119346.5A (priority and filing date: 2020-10-19) Animation generation method, readable storage medium and electronic equipment; granted as CN112270734B (en); status: Active

Priority Applications (1)

Application CN202011119346.5A; priority date 2020-10-19; filing date 2020-10-19; title: Animation generation method, readable storage medium and electronic equipment; granted as CN112270734B.


Publications (2)

CN112270734A (en): published 2021-01-26
CN112270734B (en): published 2024-01-26

Family ID: 74338931


Country Status (1)

Country Link
CN (1) CN112270734B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112891922A (en) * 2021-03-18 2021-06-04 山东梦幻视界智能科技有限公司 Virtual reality somatosensory interaction method
CN112954235A (en) * 2021-02-04 2021-06-11 读书郎教育科技有限公司 Early education panel interaction method based on family interaction
CN112967362A (en) * 2021-03-19 2021-06-15 北京有竹居网络技术有限公司 Animation generation method and device, storage medium and electronic equipment
CN114638921A (en) * 2022-05-19 2022-06-17 深圳元象信息科技有限公司 Motion capture method, terminal device, and storage medium
CN115529500A (en) * 2022-09-20 2022-12-27 中国电信股份有限公司 Method and device for generating dynamic image
CN115690267A (en) * 2022-12-29 2023-02-03 腾讯科技(深圳)有限公司 Animation processing method, device, equipment, storage medium and product
WO2023207477A1 (en) * 2022-04-27 2023-11-02 腾讯科技(深圳)有限公司 Animation data repair method and apparatus, device, storage medium, and program product

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104021584A (en) * 2014-06-25 2014-09-03 无锡梵天信息技术股份有限公司 Implementation method of skinned skeletal animation
WO2018050001A1 (en) * 2016-09-14 2018-03-22 厦门幻世网络科技有限公司 Method and device for generating animation data
CN107967693A (en) * 2017-12-01 2018-04-27 北京奇虎科技有限公司 Video Key point processing method, device, computing device and computer-readable storage medium
CN108121952A (en) * 2017-12-12 2018-06-05 北京小米移动软件有限公司 Face key independent positioning method, device, equipment and storage medium
CN108205655A (en) * 2017-11-07 2018-06-26 北京市商汤科技开发有限公司 A kind of key point Forecasting Methodology, device, electronic equipment and storage medium
US20190206145A1 (en) * 2016-11-24 2019-07-04 Tencent Technology (Shenzhen) Company Limited Image synthesis method, device and matching implementation method and device
CN110175061A (en) * 2019-05-20 2019-08-27 北京大米科技有限公司 Exchange method, device and electronic equipment based on animation
JP2020086793A (en) * 2018-11-21 2020-06-04 株式会社ドワンゴ Data correction device and program
CN111753801A (en) * 2020-07-02 2020-10-09 上海万面智能科技有限公司 Human body posture tracking and animation generation method and device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHENG CHENG et al.: "Adaptive Animation Design Method for Virtual Environments", Proceedings of the 10th International Conference on Computer Graphics Theory and Applications, pages 356-361 *
XINYI ZHANG et al.: "Data-driven autocompletion for keyframe animation", Proceedings of the 11th ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG '18), Association for Computing Machinery, page 10 *
ZHOU JIA et al.: "Research on regional sign language animation synthesis method based on gesture data analysis" (in Chinese), Application Research of Computers, vol. 28, no. 2, pages 779-781 *
LUO DINGLI: "Real-time pose recognition and character animation generation based on deep learning" (in Chinese), China Masters' Theses Full-text Database (Information Science and Technology), no. 2020, pages 138-927 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112954235A (en) * 2021-02-04 2021-06-11 读书郎教育科技有限公司 Early education panel interaction method based on family interaction
CN112954235B (en) * 2021-02-04 2021-10-29 读书郎教育科技有限公司 Early education panel interaction method based on family interaction
CN112891922A (en) * 2021-03-18 2021-06-04 山东梦幻视界智能科技有限公司 Virtual reality somatosensory interaction method
CN112967362A (en) * 2021-03-19 2021-06-15 北京有竹居网络技术有限公司 Animation generation method and device, storage medium and electronic equipment
WO2023207477A1 (en) * 2022-04-27 2023-11-02 腾讯科技(深圳)有限公司 Animation data repair method and apparatus, device, storage medium, and program product
CN114638921A (en) * 2022-05-19 2022-06-17 深圳元象信息科技有限公司 Motion capture method, terminal device, and storage medium
CN114638921B (en) * 2022-05-19 2022-09-27 深圳元象信息科技有限公司 Motion capture method, terminal device, and storage medium
CN115529500A (en) * 2022-09-20 2022-12-27 中国电信股份有限公司 Method and device for generating dynamic image
CN115690267A (en) * 2022-12-29 2023-02-03 腾讯科技(深圳)有限公司 Animation processing method, device, equipment, storage medium and product

Also Published As

Publication number Publication date
CN112270734B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
CN112270734B (en) Animation generation method, readable storage medium and electronic equipment
JP6082101B2 (en) Body motion scoring device, dance scoring device, karaoke device, and game device
CN110827383B (en) Attitude simulation method and device of three-dimensional model, storage medium and electronic equipment
CN110675475B (en) Face model generation method, device, equipment and storage medium
CN106600626B (en) Three-dimensional human motion capture method and system
Magnenat-Thalmann Modeling and simulating bodies and garments
CN109063584B (en) Facial feature point positioning method, device, equipment and medium based on cascade regression
CN104978764A (en) Three-dimensional face mesh model processing method and three-dimensional face mesh model processing equipment
CN113435431B (en) Posture detection method, training device and training equipment of neural network model
CN105243375B (en) A kind of motion characteristic extracting method and device
JP7235133B2 (en) Exercise recognition method, exercise recognition program, and information processing apparatus
CN110147737B (en) Method, apparatus, device and storage medium for generating video
CN111127668B (en) Character model generation method and device, electronic equipment and storage medium
CN110570500B (en) Character drawing method, device, equipment and computer readable storage medium
KR20120038616A (en) Method and system for providing marker-less immersive augmented reality
CN112085835A (en) Three-dimensional cartoon face generation method and device, electronic equipment and storage medium
CN107862387B (en) Method and apparatus for training supervised machine learning models
CN115601482A (en) Digital human action control method and device, equipment, medium and product thereof
CN109858402B (en) Image detection method, device, terminal and storage medium
CN114677572A (en) Object description parameter generation method and deep learning model training method
CN113658324A (en) Image processing method and related equipment, migration network training method and related equipment
JP7409390B2 (en) Movement recognition method, movement recognition program and information processing device
CN109598201B (en) Action detection method and device, electronic equipment and readable storage medium
CN116580151A (en) Human body three-dimensional model construction method, electronic equipment and storage medium
CN116433808A (en) Character animation generation method, animation generation model training method and device

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant