CN116433808A - Character animation generation method, animation generation model training method and device


Info

Publication number
CN116433808A
CN116433808A (Application CN202111661942.0A)
Authority
CN
China
Prior art keywords
animation
information
key frame
dimension
character
Prior art date
Legal status
Pending
Application number
CN202111661942.0A
Other languages
Chinese (zh)
Inventor
施一东
赵男
胡婷婷
包炎
刘超
李鑫培
师锐
Current Assignee
Shanghai Miha Youliyue Technology Co ltd
Original Assignee
Shanghai Miha Youliyue Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Miha Youliyue Technology Co ltd filed Critical Shanghai Miha Youliyue Technology Co ltd
Priority to CN202111661942.0A
Publication of CN116433808A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/80 - 2D [Two Dimensional] animation, e.g. using sprites

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the field of virtual reality, and in particular discloses a character animation generation method, an animation generation model training method, and corresponding devices, wherein the method comprises the following steps: responding to a received character configuration request, and extracting configuration parameters contained in the character configuration request; obtaining animation feature information matched with the configuration parameters, and generating animation data corresponding to the animation feature information; and driving the character model of the target character through the animation data to obtain the character animation of the target character. In this way, the character animation matched with the configuration parameters can be generated automatically based on the association between the configuration parameters and the animation feature information, which greatly improves the efficiency of generating animation data.

Description

Character animation generation method, animation generation model training method and device
Technical Field
Embodiments of the invention relate to the field of artificial intelligence, and in particular to a character animation generation method, an animation generation model training method, and a corresponding device.
Background
In the field of animation technology, animation data of various animated figures must be obtained so that the corresponding animations can be rendered. In the prior art, an animator is usually required to produce the animation data of an animated character so that the resulting character appears lifelike.
However, since different characters have different personality traits and behavior characteristics, the animation data of different characters should also have different styles. If the animation data of every character were produced solely by animators, considerable time and cost would be required. How to generate character animations of different styles more efficiently is an urgent problem to be solved in the animation technology field.
Disclosure of Invention
In view of the foregoing, the present invention is proposed to provide a character animation generation method, an animation generation model training method, and corresponding devices, which overcome or at least partially solve the foregoing problems.
According to one aspect of the present invention, there is provided a character animation generation method, including:
responding to a received character configuration request, and extracting configuration parameters contained in the character configuration request;
obtaining animation feature information matched with the configuration parameters, and generating animation data corresponding to the animation feature information;
and driving the role model of the target role through the animation data to obtain the role animation of the target role.
Optionally, the obtaining the animation feature information matched with the configuration parameter, and generating the animation data corresponding to the animation feature information includes:
Inputting the configuration parameters into an animation generation model, and determining animation characteristic information matched with the configuration parameters through the animation generation model so as to generate animation data corresponding to the animation characteristic information.
Optionally, the character animation is a bone skinning animation, and the bone skinning animation includes bone nodes corresponding to each limb part, and the animation feature information includes at least one of the following:
the movement speed and/or the movement track of the skeleton nodes corresponding to the limb parts, and the relative position relationship among the skeleton nodes corresponding to the limb parts.
Optionally, the configuration parameters include at least one dimension parameter, and each dimension parameter includes at least one dimension subparameter; wherein the dimensional parameters include at least one of: a tag dimension parameter, an attribute dimension parameter, and a scene dimension parameter;
and, in the case that the dimension parameters included in the character configuration request are plural, the acquiring the animation feature information matched with the configuration parameters includes:
and respectively determining dimension characteristic information corresponding to each dimension parameter, and carrying out fusion processing on the plurality of dimension characteristic information to obtain the animation characteristic information.
Optionally, the character configuration request further includes: parameter weight values corresponding to each dimension parameter and input through a weight configuration entry in a configuration interface;
the determining dimension characteristic information corresponding to each dimension parameter includes: and determining dimension characteristic information corresponding to each dimension parameter according to the parameter weight value of the dimension parameter.
Optionally, the tag dimension parameter includes at least one of the following dimension subparameters: a strength subparameter, a speed subparameter, a proficiency subparameter, and an aggressiveness subparameter;
the attribute dimension parameters include at least one dimension subparameter of: feature subparameter, character subparameter, hobby subparameter, occupation subparameter, and region subparameter;
the scene dimension parameters include at least one dimension subparameter of: an object attribute sub-parameter corresponding to the interactive object, a media attribute sub-parameter for describing the fluid media, and an apparel attribute sub-parameter for describing apparel information.
According to still another aspect of the present invention, there is provided a training method of an animation generation model, comprising:
extracting key frames from each animation data sample;
extracting, for each key frame respectively, animation feature information of the key frame and obtaining labeling information of the key frame;
generating a training sample set according to the animation characteristic information and the labeling information of each key frame;
training the training sample set to obtain the animation generation model.
Optionally, the extracting the key frame from each animation data sample includes: for each animation data sample respectively, dividing the animation data sample into a plurality of animation intervals, setting an interval label for each animation interval, and extracting key frames from each animation interval;
the extracting, for each key frame respectively, the animation feature information of the key frame and obtaining the labeling information of the key frame comprises the following steps:
for each key frame respectively, determining the animation feature information of the key frame in combination with adjacent frames in the animation interval containing the key frame, and determining the labeling information of the key frame according to the interval label of the animation interval in which the key frame is located.
Optionally, generating the training sample set according to the animation feature information and the labeling information of each key frame includes:
dividing each key frame into a plurality of groups of key frame sets;
for each group of key frames respectively, obtaining the animation feature information and labeling information of the group of key frames to obtain sample feature data corresponding to the group of key frames;
and generating the training sample set according to the sample characteristic data of each group of key frame sets.
Optionally, the animation feature information of the key frame includes: the movement speed and/or the movement track of the skeleton nodes corresponding to the limb parts, and the relative position relationship among the skeleton nodes corresponding to the limb parts.
Optionally, the labeling information includes at least one of the following: label type labeling information, attribute type labeling information and scene type labeling information;
wherein the label class annotation information comprises at least one of the following: strength-class labeling information, speed-class labeling information, proficiency-class labeling information, and aggressiveness-class labeling information;
the attribute type annotation information comprises at least one of the following: feature class label information, character class label information, hobby class label information, occupation class label information and region class label information;
the scene class annotation information comprises at least one of the following: object attribute class annotation information corresponding to the interactive object, media attribute class annotation information for describing the fluid media, and apparel class annotation information for describing the apparel information.
According to still another aspect of the present invention, there is provided a character animation generating apparatus comprising:
the parameter extraction module is suitable for responding to a received character configuration request and extracting configuration parameters contained in the character configuration request;
the acquisition module is suitable for acquiring the animation characteristic information matched with the configuration parameters and generating animation data corresponding to the animation characteristic information;
and the animation module is suitable for driving the role model of the target role through the animation data to obtain the role animation of the target role.
Optionally, the acquiring module is specifically adapted to:
inputting the configuration parameters into an animation generation model, and determining animation characteristic information matched with the configuration parameters through the animation generation model so as to generate animation data corresponding to the animation characteristic information.
Optionally, the character animation is a bone skinning animation, and the bone skinning animation includes bone nodes corresponding to each limb part, and the animation feature information includes at least one of the following:
the movement speed and/or the movement track of the skeleton nodes corresponding to the limb parts, and the relative position relationship among the skeleton nodes corresponding to the limb parts.
Optionally, the configuration parameters include at least one dimension parameter, and each dimension parameter includes at least one dimension subparameter; wherein the dimensional parameters include at least one of: a tag dimension parameter, an attribute dimension parameter, and a scene dimension parameter;
and, in the case that the dimension parameters included in the character configuration request are plural, the acquiring module is specifically adapted to:
and respectively determining dimension characteristic information corresponding to each dimension parameter, and carrying out fusion processing on the plurality of dimension characteristic information to obtain the animation characteristic information.
Optionally, the character configuration request further includes: parameter weight values corresponding to each dimension parameter and input through a weight configuration entry in a configuration interface;
the acquisition module is specifically adapted to: and determining dimension characteristic information corresponding to each dimension parameter according to the parameter weight value of the dimension parameter.
Optionally, the tag dimension parameter includes at least one of the following dimension subparameters: a strength subparameter, a speed subparameter, a proficiency subparameter, and an aggressiveness subparameter;
the attribute dimension parameters include at least one dimension subparameter of: feature subparameter, character subparameter, hobby subparameter, occupation subparameter, and region subparameter;
The scene dimension parameters include at least one dimension subparameter of: an object attribute sub-parameter corresponding to the interactive object, a media attribute sub-parameter for describing the fluid media, and an apparel attribute sub-parameter for describing apparel information.
According to still another aspect of the present invention, there is provided a training apparatus for an animation generation model, comprising:
a key frame extraction module adapted to extract key frames from each of the animation data samples;
the feature extraction module is suitable for extracting animation feature information of each key frame and acquiring labeling information of the key frame;
the generation module is suitable for generating a training sample set according to the animation characteristic information and the labeling information of each key frame;
and the training module is suitable for training the training sample set to obtain the animation generation model.
Optionally, the key frame extraction module is specifically adapted to: for each animation data sample respectively, dividing the animation data sample into a plurality of animation intervals, setting an interval label for each animation interval, and extracting key frames from each animation interval;
the feature extraction module is specifically adapted to: for each key frame respectively, determining the animation feature information of the key frame in combination with adjacent frames in the animation interval containing the key frame, and determining the labeling information of the key frame according to the interval label of the animation interval in which the key frame is located.
Optionally, the generating module is specifically adapted to:
dividing each key frame into a plurality of groups of key frame sets;
for each group of key frames respectively, obtaining the animation feature information and labeling information of the group of key frames to obtain sample feature data corresponding to the group of key frames;
and generating the training sample set according to the sample characteristic data of each group of key frame sets.
Optionally, the animation feature information of the key frame includes: the movement speed and/or the movement track of the skeleton nodes corresponding to the limb parts, and the relative position relationship among the skeleton nodes corresponding to the limb parts.
Optionally, the labeling information includes at least one of the following: label type labeling information, attribute type labeling information and scene type labeling information;
wherein the label class annotation information comprises at least one of the following: strength-class labeling information, speed-class labeling information, proficiency-class labeling information, and aggressiveness-class labeling information;
the attribute type annotation information comprises at least one of the following: feature class label information, character class label information, hobby class label information, occupation class label information and region class label information;
The scene class annotation information comprises at least one of the following: object attribute class annotation information corresponding to the interactive object, media attribute class annotation information for describing the fluid media, and apparel class annotation information for describing the apparel information.
According to still another aspect of the present invention, there is provided an electronic apparatus including: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the above character animation generation method, the above animation generation model training method, and/or the editing method.
According to still another aspect of the embodiments of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, where the executable instruction causes a processor to perform operations corresponding to the above character animation generation method, the above animation generation model training method, and/or the editing method.
According to the character animation generation method, the animation generation model training method, and the devices described above, animation feature information matched with the configuration parameters contained in a character configuration request can be obtained, animation data corresponding to the animation feature information can be generated, and the character model of the target character can be driven by the animation data to obtain the character animation of the target character. In this way, the character animation matched with the configuration parameters can be generated automatically based on the association between the configuration parameters and the animation feature information, which greatly improves the efficiency of generating animation data.
The foregoing is merely an overview of the technical solution of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the contents of the description, and in order that the above and other objects, features and advantages of the present invention may become more readily apparent, specific embodiments of the present invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 illustrates a flow chart of a method for generating character animation according to one embodiment of the present invention;
FIG. 2 is a flow chart of a training method for an animation generation model according to a further embodiment of the present invention;
FIG. 3 illustrates a block diagram of an exemplary implementation of the present invention;
FIG. 4 is a schematic structural diagram of a character animation generating device provided in this embodiment;
FIG. 5 is a schematic structural diagram of a training device for an animation generation model provided in this embodiment;
FIG. 6 is a schematic structural diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 illustrates a flow chart of a method for generating character animation according to one embodiment of the present invention. As shown in FIG. 1, the method includes:
Step S110: in response to a received character configuration request, extracting configuration parameters contained in the character configuration request.
The character configuration request is used for configuring the target character, and may be triggered when a new character needs to be created. The configuration parameters included in the character configuration request are used to describe the characteristics of the target character, and may be of various types. In addition, to describe finer-grained features within each dimension, each dimension parameter may further include at least one dimension sub-parameter.
In an alternative implementation, the configuration parameters include at least one dimension parameter, each dimension parameter describing a feature of a corresponding dimension. For example, the dimensional parameters include at least one of: tag dimension parameters, attribute dimension parameters, and scene dimension parameters. The tag dimension parameter is used for describing the characteristics of the target character in a tag mode, the attribute dimension parameter is used for describing the attribute characteristics of the target character, and the scene dimension parameter is used for describing the related information of the scene where the target character is located.
In a specific implementation, the configuration parameters may include only tag dimension parameters, so that character features can be conveniently described in the form of tags. The value of a tag may be a personality characteristic such as lovely, aloof, or proud, or a hobby characteristic such as sporty or leisurely.
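For illustration only, the following Python sketch shows one possible way to represent such a character configuration request and its dimension parameters; the class names, field names and example values are assumptions made for this sketch, not structures prescribed by the embodiment.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DimensionParameter:
    """One dimension parameter (tag, attribute or scene dimension)."""
    dimension: str                   # "tag", "attribute" or "scene"
    sub_parameters: Dict[str, str]   # dimension sub-parameters, e.g. {"personality": "lovely"}
    weight: float = 1.0              # parameter weight value from the weight configuration entry

@dataclass
class CharacterConfigRequest:
    """Character configuration request carrying the configuration parameters."""
    character_id: str
    parameters: List[DimensionParameter] = field(default_factory=list)

def extract_configuration_parameters(request: CharacterConfigRequest) -> List[DimensionParameter]:
    """Step S110: extract the configuration parameters contained in the request."""
    return request.parameters

# Example: a request combining a "lovely" personality tag with an underwater scene.
request = CharacterConfigRequest(
    character_id="npc_001",
    parameters=[
        DimensionParameter("tag", {"personality": "lovely"}, weight=0.7),
        DimensionParameter("scene", {"medium": "water"}, weight=0.3),
    ],
)
params = extract_configuration_parameters(request)
```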
Step S120: and obtaining the animation characteristic information matched with the configuration parameters, and generating animation data corresponding to the animation characteristic information.
A mapping relationship between the configuration parameters and the animation feature information is established in advance. For example, the animation feature information corresponding to a "lovely" tag configuration parameter should be action features that convey cuteness. The mapping relationship between the configuration parameters and the animation feature information may be established in various ways.
In an alternative implementation, the animation generation model is trained in advance, and accordingly, the configuration parameters are input into the animation generation model, and the animation feature information matched with the configuration parameters is determined through the animation generation model so as to generate the animation data corresponding to the animation feature information. The animation generation model learns in advance through a training mode to obtain the mapping relation between the configuration parameters and the animation characteristic information, so that the animation characteristic information matched with the configuration parameters can be dynamically determined.
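A minimal sketch of this model-based path follows; the vocabulary-based encoding and the callable regressor interface are assumptions for the sketch, not details specified by the embodiment.

```python
import numpy as np

def config_to_vector(params, vocabulary):
    """Encode the configuration parameters as a fixed-length input vector
    (a simple weighted one-hot over a tag vocabulary; an assumed scheme)."""
    vec = np.zeros(len(vocabulary), dtype=np.float32)
    for p in params:
        for value in p.sub_parameters.values():
            if value in vocabulary:
                vec[vocabulary.index(value)] = p.weight
    return vec

def infer_animation_features(params, model, vocabulary):
    """Feed the encoded configuration parameters to the trained animation generation
    model and return the matched animation feature information; decoding that
    information into keyframed bone animation data is engine-specific."""
    return model(config_to_vector(params, vocabulary))  # model: any trained callable regressor
```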
In addition, when the character configuration request contains a plurality of dimension parameters, dimension feature information corresponding to each dimension parameter is determined separately, and the pieces of dimension feature information are fused to obtain the animation feature information. Furthermore, when the character configuration request also contains parameter weight values for the respective dimension parameters, input through the weight configuration entry in the configuration interface, the dimension feature information corresponding to each dimension parameter is determined according to that dimension parameter's weight value. Specifically, the dimension parameters and their weight values entered by the user can be received through the parameter configuration entry and the weight configuration entry contained in the configuration interface.
In addition, the character animation in this embodiment may be a skeletal skin animation, and the skeletal skin animation includes skeletal nodes corresponding to each limb portion, and the corresponding animation feature information includes at least one of the following: the movement speed and/or the movement track of the skeleton nodes corresponding to the limb parts, and the relative position relationship among the skeleton nodes corresponding to the limb parts. Wherein, the movement speed of each skeleton node is used for reflecting the movement speed of each limb part of the character. The motion trail of the skeleton node is used for reflecting the motion trail of each limb part of the character. The relative positional relationship between the skeletal nodes corresponding to each limb part comprises: the distance between the left hand and the right hand, the distance between the hands and the face, the distance between the left foot and the right foot, the magnitude of the body bending (specifically, the distance between the back and the foot) and the like. The motion mode of each limb part in the character animation can be described through the animation characteristic information.
When the character configuration request contains a plurality of dimension parameters, the dimension parameters may correspond to the same dimension or to different dimensions. When they correspond to the same dimension, the character configuration request may contain several tag dimension parameters, for example a "lovely" tag and a "proud" tag; first dimension feature information corresponding to the first tag (e.g. the "lovely" tag) and second dimension feature information corresponding to the second tag (e.g. the "proud" tag) are then obtained respectively, and the first and second dimension feature information are fused to obtain the animation feature information. The fusion can be performed according to the weight of each dimension parameter. Accordingly, when the character configuration request also contains parameter weight values for the dimension parameters, with the first tag having a first weight and the second tag having a second weight, first local feature information corresponding to the first weight is extracted from the first dimension feature information, second local feature information corresponding to the second weight is extracted from the second dimension feature information, and the first and second local feature information are fused to obtain the animation feature information. Similarly, when the dimension parameters correspond to different dimensions, fusion can be performed in a similar manner. In short, the fusion processing yields animation feature information matched with the composite parameters.
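A minimal sketch of such weight-based fusion, assuming each piece of dimension feature information has already been expressed as a fixed-length vector; the convex combination below is one possible choice, since the embodiment does not fix the fusion scheme.

```python
import numpy as np

def fuse_dimension_features(dim_features, weights):
    """Fuse several dimension feature vectors into one animation feature vector,
    weighting each dimension by its parameter weight value."""
    weights = np.asarray(weights, dtype=np.float32)
    weights = weights / weights.sum()                 # normalise the parameter weight values
    stacked = np.stack([np.asarray(f, dtype=np.float32) for f in dim_features])
    return (weights[:, None] * stacked).sum(axis=0)   # weighted fusion across dimensions

# Example: blend a "lovely" tag (weight 0.7) with a "proud" tag (weight 0.3).
lovely_features = np.array([0.9, 0.2, 0.1])   # illustrative first dimension feature information
proud_features  = np.array([0.1, 0.8, 0.6])   # illustrative second dimension feature information
animation_features = fuse_dimension_features([lovely_features, proud_features], [0.7, 0.3])
```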
In this embodiment, the individual dimension parameters can be set flexibly by those skilled in the art. For example, in one alternative implementation, the tag dimension parameters include at least one of the following dimension subparameters: a strength subparameter, a speed subparameter, a proficiency subparameter, and an aggressiveness subparameter.
The attribute dimension parameters include at least one of the following dimension subparameters: a feature subparameter, a personality subparameter, a hobby subparameter, an occupation subparameter, a weapon type subparameter, and a region subparameter (for identifying, for example, the character's native region and activity region).
The scene dimension parameters include at least one dimension subparameter of: an object attribute sub-parameter corresponding to the interactive object, a media attribute sub-parameter for describing the fluid media, and an apparel attribute sub-parameter for describing apparel information. In addition, the scene dimension parameters may also include dimension sub-parameters corresponding to the following information: gravity, wind force, ground friction, ground angle, ground material, buoyancy resistance of water, distance between an interactive object and a character, weather environment and the like. In summary, various scene parameters in the application process can be used as scene dimension parameters, including but not limited to scene objects and parameters thereof, and other influencing parameters such as real-time environment parameters of weather systems and the like.
Step S130: and driving the character model of the target character through the animation data to obtain the character animation of the target character.
The character model of the target character can be a skinned skeleton model, and correspondingly, the animation data is also skeleton animation data, so that the skinned skeleton model of the target character can be directly driven by the skeleton animation data to perform corresponding actions, and the character animation of the target character can be obtained.
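A rough sketch of this driving step is shown below; the skeleton interface (set_bone_transform, update_skinning) is a placeholder for whatever the rendering engine provides, not an API defined by the embodiment.

```python
def drive_character_model(skeleton, animation_data, fps=30):
    """Drive the skinned skeleton model of the target character with the generated
    bone animation data. `animation_data` is assumed to be a list of per-frame
    dicts mapping bone names to (position, rotation) transforms."""
    for frame_index, frame in enumerate(animation_data):
        time = frame_index / fps
        for bone_name, (position, rotation) in frame.items():
            skeleton.set_bone_transform(bone_name, position, rotation)  # pose the skeleton
        skeleton.update_skinning()                                      # deform the skinned mesh
        yield time, skeleton
```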
Therefore, the method can automatically generate the character animation matched with the configuration parameters based on the association relation between the configuration parameters and the animation characteristic information, and greatly improves the generation efficiency of animation data. And by reasonably setting each configuration parameter and parameter weight value contained in the character configuration request, various types of character animations can be flexibly fused, and the setting requirement of personalized animations is met.
Fig. 2 is a flowchart of a training method of an animation generation model according to another embodiment of the present invention. This embodiment is used to train the animation generation model used in the previous embodiment. As shown in fig. 2, the method includes:
step S210: key frames are extracted from each animation data sample.
The animation data sample may be animation data of each character that has been created by an animator. Since the action characteristics of different roles are different, the animation data of each existing role is collected, and a large number of animation data samples can be obtained. Wherein, the animation data of each existing character is also called key frame data, and specifically comprises data corresponding to a plurality of key frames. Accordingly, in order to accurately label each sample, it is necessary to extract key frames from each animation data sample, respectively.
Step S220: and extracting animation characteristic information of each key frame according to each key frame, and acquiring labeling information of the key frame.
Each key frame is analyzed to extract the animation feature information contained therein. The animation feature information of a key frame includes: the movement speed and/or movement track of the bone nodes corresponding to each limb part, the relative positional relationship among the bone nodes corresponding to each limb part, and the like. In addition, the animation feature information may further include: rotation information, displacement information and scaling information of the bone nodes corresponding to each limb part.
In addition, the labeling information of a key frame is used to label the type of the key frame, and specifically includes at least one of the following: tag class labeling information, attribute class labeling information (also called character attribute labels), and scene class labeling information. The tag class labeling information includes at least one of the following: strength-class labeling information, speed-class labeling information, proficiency-class labeling information, and aggressiveness-class labeling information. The attribute class labeling information includes at least one of the following: feature class labeling information, character class labeling information, hobby class labeling information, occupation class labeling information, and region class labeling information. The scene class labeling information includes at least one of the following: object attribute class labeling information corresponding to the interactive object, media attribute class labeling information describing the fluid medium, and apparel class labeling information describing the apparel information. The tag class labeling information may also be referred to as animation tags and carries score information; for example, each piece of tag class labeling information has a score range of 1-10 points to indicate its degree, and tag class labeling information is usually a required label.
The movement speed of the bones can reflect the sense of speed of the animation, the recoil after a movement pauses can reflect the sense of strength of the animation, and so on. In addition, the motion characteristics of the character can be accurately reflected by collecting the distance between the hands and the face, the distance between the two hands, the distance between the two feet, the bending amplitude of the body, and the like. In short, by collecting the animation features in an all-round way, the association between the labeling information and the animation features can be established accurately.
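For concreteness, here is a sketch of extracting these descriptors from one key frame and an adjacent frame; the bone names and the use of 3D position vectors are assumptions for the sketch.

```python
import numpy as np

def keyframe_features(prev_frame, frame, dt):
    """Extract animation feature information of one key frame using an adjacent frame:
    per-bone movement speeds plus the relative distances mentioned in the text
    (hands, hand-face, feet, body bend). Frames map bone names to 3D positions."""
    speeds = {name: np.linalg.norm(frame[name] - prev_frame[name]) / dt
              for name in frame}                                   # per-bone movement speed
    return {
        "speeds": speeds,
        "hand_distance": np.linalg.norm(frame["left_hand"] - frame["right_hand"]),
        "hand_face_distance": np.linalg.norm(frame["left_hand"] - frame["head"]),
        "foot_distance": np.linalg.norm(frame["left_foot"] - frame["right_foot"]),
        "body_bend": np.linalg.norm(frame["spine"] - frame["left_foot"]),  # back-to-foot distance
    }

# Example with illustrative bone positions.
prev = {"left_hand": np.array([0.0, 1.0, 0.0]), "right_hand": np.array([0.3, 1.0, 0.0]),
        "head": np.array([0.0, 1.6, 0.0]), "left_foot": np.array([-0.1, 0.0, 0.0]),
        "right_foot": np.array([0.1, 0.0, 0.0]), "spine": np.array([0.0, 1.1, 0.0])}
curr = {k: v + np.array([0.05, 0.0, 0.0]) for k, v in prev.items()}
features = keyframe_features(prev, curr, dt=1 / 30)
```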
Step S230: and generating a training sample set according to the animation characteristic information and the labeling information of each key frame.
Specifically, each key frame is assigned to one of a plurality of groups of key frames; then, for each group of key frames respectively, the animation feature information and labeling information of the group are acquired to obtain the sample feature data corresponding to the group, and the training sample set is generated according to the sample feature data of each group of key frames.
Since a group of actions lasts for a period of time and therefore needs to be presented by several key frames in turn, the key frame sets can be divided according to the connection relationships between actions. Accordingly, a key frame set consists of at least two temporally adjacent key frames that together represent one group of actions, such as squatting or raising the hands. For each group of key frames, the animation feature information and labeling information of the group are acquired; an animation feature vector corresponding to the group is generated from its animation feature information and labeled with its labeling information, thereby obtaining the sample feature data corresponding to the group. It follows that the training sample set contains a plurality of pieces of sample feature data. In addition, dividing key frames into sets makes it convenient to determine several associated key frames, so that action characteristics can be accurately reflected based on the temporal features, or the bone displacement/rotation speed features, between the associated key frames.
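A sketch of assembling such sample feature data, assuming each key frame set contains the same number of key frames and that a per-frame feature function (such as the one sketched above) returns a flat numeric vector; these are simplifying assumptions, not requirements of the embodiment.

```python
import numpy as np

def build_training_samples(keyframe_sets, labels, frame_feature_fn):
    """Turn each group of key frames into one training sample: concatenate the
    per-frame animation feature vectors of the group into an animation feature
    vector and attach the group's labeling information."""
    X, y = [], []
    for frames, label in zip(keyframe_sets, labels):
        feature_vector = np.concatenate([frame_feature_fn(f) for f in frames])
        X.append(feature_vector)
        y.append(label)   # e.g. {"strength": 6, "speed": 8, "personality": "lovely"}
    return np.stack(X), y   # the training sample set: feature vectors plus labeling information
```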
In a specific implementation, the animation data sample may be divided into a plurality of animation intervals in advance, so that key frames are extracted per animation interval and labeling information is set for each animation interval. Accordingly, when extracting key frames from each animation data sample, the animation data sample is divided into a plurality of animation intervals, an interval label is set for each animation interval, and key frames are then extracted from each animation interval. When extracting the animation feature information of each key frame and obtaining its labeling information, the animation feature information of the key frame is determined in combination with adjacent frames in the animation interval containing the key frame, and the labeling information of the key frame is determined according to the interval label of that animation interval. The animation intervals can be divided according to the connection relationships between actions or according to action types. In addition, the key frame sets mentioned above may also be divided based on the animation intervals, that is: a plurality of key frames belonging to the same animation interval are determined as one group of key frames.
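A sketch of this interval-based preparation; the (time, frame) and (start, end, label) formats below are assumptions standing in for whatever an animator's segmentation tool produces.

```python
def split_into_intervals(keyframes, boundaries):
    """Divide one animation data sample into labeled animation intervals and take
    the key frames falling inside each interval as one key frame set; every key
    frame inherits the interval label as its labeling information."""
    intervals = []
    for start, end, label in boundaries:
        frames = [frame for t, frame in keyframes if start <= t < end]
        intervals.append({"label": label, "keyframes": frames})
    return intervals

# Example: a jump animation split into "crouch" and "leap" intervals.
# intervals = split_into_intervals(sample_keyframes, [(0.0, 0.4, "crouch"), (0.4, 1.0, "leap")])
```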
Step S240: training is carried out aiming at the training sample set, and an animation generation model is obtained.
Training may be performed with a variety of machine learning algorithms, for example a deep learning algorithm or a reinforcement learning algorithm. Deep learning is used to fit the best-matching information from the data: a large amount of collected data is passed to the model and processed by means of machine learning, deep learning, reinforcement training and the like, so as to obtain a set of animation data schemes.
In short, the invention is not limited to a specific training algorithm, as long as the corresponding relation between the animation features and the labeling information can be learned. Accordingly, the finally generated animation generation model can output animation data matched with the configuration parameters based on the configuration parameters corresponding to the labeling information input by the user.
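Purely as an illustration of one such algorithm, the sketch below fits a small feed-forward network that maps an encoded labeling-information vector to an animation feature vector; the network size, loss and optimizer are arbitrary choices for the sketch, not taken from the embodiment.

```python
import torch
import torch.nn as nn

class AnimationGenerationModel(nn.Module):
    """Maps an encoded labeling-information vector to an animation feature vector."""
    def __init__(self, label_dim: int, feature_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(label_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, feature_dim),
        )

    def forward(self, x):
        return self.net(x)

def train_animation_model(label_vectors, feature_vectors, epochs=100, lr=1e-3):
    """Learn the correspondence between labeling information and animation feature
    information from the training sample set (both arguments are float tensors)."""
    model = AnimationGenerationModel(label_vectors.shape[1], feature_vectors.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(label_vectors), feature_vectors)
        loss.backward()
        optimizer.step()
    return model
```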
In summary, the approach in this embodiment can automatically learn the correspondence between various kinds of labeling information and animation feature data through machine learning, so that animation data can be output automatically. Moreover, when the configuration parameters input by the user are composite configuration parameters covering several dimensions, the animation generation model can automatically output animation data matched with the composite configuration parameters, that is: the animation feature data corresponding to the composite configuration parameters can be fitted together to obtain fitted animation data that matches the composite configuration parameters. In this way, animation data of various custom types can be fitted automatically from the animation data of existing characters in the database, so that personalized animation data can be generated flexibly and efficiently.
For ease of understanding, the implementation of the present invention will be described in detail below by taking a specific example as an example:
first, animation data for training is collected. The animation data may be animation data of a character that has been created. And labeling the animation data according to the character characteristics of the character, thereby obtaining a training sample set according to the labeling result.
Then, the training sample set is trained with a deep learning algorithm or a reinforcement training method. When a deep learning algorithm is used, data of n groups of core parameters need to be summarized and organized from the sample database, so that, according to the configuration parameters input by the user, the pose matching degree, speed, movement trend and the like of each sample resource in the database can be queried and animation data can be selected automatically from the sample database to synthesize a new animation for playback. The reinforcement training process is similar to deep learning; training can be performed with an added reward mechanism, yielding a set of calculation formulas through which the animation data matched with the user's configuration parameters can be determined. The set of calculation formulas may be a multivariate equation whose parameters include, but are not limited to: a picture weight value, the camera FOV (field of view) value, the camera position, the camera viewpoint position, and a parameter indicating whether the subject is within the camera's field of view. The most suitable animation pose is then obtained through the formula calculation.
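As an example of just one of those inputs, the following sketch computes the flag for whether the subject lies within the camera's field of view; the overall scoring formula itself is not disclosed, so nothing here should be read as the claimed equation.

```python
import numpy as np

def subject_in_fov(camera_pos, look_at, subject_pos, fov_degrees):
    """Return True if the subject lies inside the camera's field of view
    (simple angular test against the view direction)."""
    view_dir = look_at - camera_pos
    to_subject = subject_pos - camera_pos
    cos_angle = np.dot(view_dir, to_subject) / (
        np.linalg.norm(view_dir) * np.linalg.norm(to_subject))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= fov_degrees / 2.0

# Example usage with illustrative values.
inside = subject_in_fov(np.array([0.0, 1.5, -5.0]),   # camera position
                        np.array([0.0, 1.0,  0.0]),   # camera viewpoint (look-at)
                        np.array([0.5, 1.2,  1.0]),   # subject position
                        fov_degrees=60.0)
```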
Fig. 3 shows a block diagram of the execution apparatus of this example, and as shown in fig. 3, various types of animation data such as walking, running, waving, swimming, diving, jumping, and the like are stored in the animation database. Correspondingly, the data after being processed (namely, a training sample set) is obtained through a data preprocessing module, the training sample set is trained through a training module, whether the training result is accurate or not is verified through a verification module, and finally, proper animation data are output through an animation generation module. The animation generation module has a fitting function, and can perform fitting processing on animation feature data corresponding to a plurality of configuration parameters to obtain personalized animation data.
The animation data stored as a sample in the animation database of the above example is typically bone animation data carrying bone information. In a modification of the above example, conventional animation data that does not carry bone information may be further stored in the animation database as training samples. Accordingly, skeletal animation data corresponding to the conventional animation data is generated through a preset algorithm at a later stage.
In addition, since the conventional animation data is generally 2D animation data, more abundant additional information can be extracted therefrom, including: media attribute class annotation information describing the fluid media, such as water flow information in the scene information, including water flow, water pressure, buoyancy, resistance, etc. The extracted animation feature data further includes, in addition to the motion features of the respective limb portions: expression characteristic data and clothing characteristic data. Therefore, by means of the advantage that more additional information can be extracted from the 2D animation data, the association relationship between scene information such as water flow information and the expression or clothes of the character can be further learned, and further the character expression and clothes in the finally generated animation data can be dynamically adjusted along with the scene change. Or, the association relation between the scene information such as the water flow information and the clothes information and the expression of the character can be further learned, so that the character expression in the finally generated animation data can be dynamically adjusted along with the water flow change or the clothes change in the scene.
In an alternative implementation, the corresponding torso animation may be calculated from the data in the animation database by applying water flow information (e.g., flow rate, water pressure, buoyancy, resistance) to the torso bones. In yet another alternative implementation, the corresponding expression animation may be calculated based on water flow information (e.g., flow rate, water pressure, buoyancy, resistance), clothing information (e.g., texture parameters, mass, elasticity) and the pose data in the animation database. When the data in the animation database are generated by pose simulation, character actions such as walking, running and squatting are recognized as continuous actions through frame-by-frame video analysis.
In addition, the joint positions of a player's hands, feet, torso, head and the like can be identified by a motion capture device, and the motion data can be computed directly by means of dimensional measurement, positioning of objects in physical space, azimuth measurement and the like, so that a large amount of motion capture data can be converted into training samples for the animation database. For example, for the torso portion, the water flow, water pressure, buoyancy and resistance of the water in the game world are defined, together with the mass and rotation angle limits of the individual skeletal joints of the game character. When the player character is subjected to a physical force, the direction and magnitude of the force on each joint are calculated, simulating a Newtonian mechanical model; the data in the pose database are then computed and compared against input variables such as water flow, water pressure, buoyancy and resistance, so as to simulate the animation effect of the torso bones under different conditions. As another example, for the expression portion, the water flow, water pressure, buoyancy and resistance of the water in the game world are defined, together with the texture parameters, mass and elasticity of the character's clothing. When the player character is subjected to a physical force, the direction and magnitude of the force on the clothing are calculated, again simulating a Newtonian mechanical model, and the data in the animation database are computed and compared against input variables such as water flow, water pressure, buoyancy and resistance to simulate the character's expression under different conditions.
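A toy version of the per-joint force balance described here; the linear drag model and the constant coefficients are simplifying assumptions, since the embodiment only names the force terms.

```python
import numpy as np

def joint_net_force(mass, gravity, water_velocity, joint_velocity,
                    buoyancy, drag_coefficient):
    """Sum the named forces (gravity, buoyancy, water flow drag/resistance) acting
    on a single skeletal joint and return the resulting acceleration (F = m * a)."""
    gravity_force = mass * np.array([0.0, -gravity, 0.0])
    buoyancy_force = np.array([0.0, buoyancy, 0.0])
    relative_flow = water_velocity - joint_velocity    # water moving past the joint
    drag_force = drag_coefficient * relative_flow      # simple linear drag assumption
    net_force = gravity_force + buoyancy_force + drag_force
    return net_force / mass                            # acceleration used to drive the bone

# Example: a joint of 2 kg in a gentle current.
accel = joint_net_force(mass=2.0, gravity=9.8,
                        water_velocity=np.array([0.5, 0.0, 0.0]),
                        joint_velocity=np.array([0.0, 0.0, 0.0]),
                        buoyancy=15.0, drag_coefficient=4.0)
```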
FIG. 4 is a schematic structural diagram of a character animation generating apparatus according to still another embodiment of the present invention. The generating apparatus comprises:
a parameter extraction module 41 adapted to extract configuration parameters contained in a role configuration request in response to the received role configuration request;
an acquisition module 42 adapted to acquire animation feature information matched with the configuration parameters, and generate animation data corresponding to the animation feature information;
and the animation module 43 is suitable for driving the character model of the target character through the animation data to obtain the character animation of the target character.
Optionally, the acquiring module is specifically adapted to:
inputting the configuration parameters into an animation generation model, and determining animation characteristic information matched with the configuration parameters through the animation generation model so as to generate animation data corresponding to the animation characteristic information.
Optionally, the character animation is a bone skinning animation, and the bone skinning animation includes bone nodes corresponding to each limb part, and the animation feature information includes at least one of the following:
the movement speed and/or the movement track of the skeleton nodes corresponding to the limb parts, and the relative position relationship among the skeleton nodes corresponding to the limb parts.
Optionally, the configuration parameters include at least one dimension parameter, and each dimension parameter includes at least one dimension subparameter; wherein the dimensional parameters include at least one of: a tag dimension parameter, an attribute dimension parameter, and a scene dimension parameter;
and, in the case that the dimension parameters included in the character configuration request are plural, the acquiring module is specifically adapted to:
and respectively determining dimension characteristic information corresponding to each dimension parameter, and carrying out fusion processing on the plurality of dimension characteristic information to obtain the animation characteristic information.
Optionally, the character configuration request further includes: parameter weight values corresponding to each dimension parameter and input through a weight configuration entry in a configuration interface;
the acquisition module is specifically adapted to: and determining dimension characteristic information corresponding to each dimension parameter according to the parameter weight value of the dimension parameter.
Optionally, the tag dimension parameter includes at least one of the following dimension subparameters: a strength subparameter, a speed subparameter, a proficiency subparameter, and an aggressiveness subparameter;
the attribute dimension parameters include at least one dimension subparameter of: feature subparameter, character subparameter, hobby subparameter, occupation subparameter, and region subparameter;
The scene dimension parameters include at least one dimension subparameter of: an object attribute sub-parameter corresponding to the interactive object, a media attribute sub-parameter for describing the fluid media, and an apparel attribute sub-parameter for describing apparel information.
FIG. 5 is a schematic structural diagram of a training device for an animation generation model according to another embodiment of the present invention. The training device includes:
a key frame extraction module 51 adapted to extract key frames from respective animation data samples;
the feature extraction module 52 is adapted to extract animation feature information of the key frame and obtain labeling information of the key frame for each key frame respectively;
the generating module 53 is adapted to generate a training sample set according to the animation feature information and the labeling information of each key frame;
the training module 54 is adapted to train on the training sample set to obtain the animation generation model.
Optionally, the key frame extraction module is specifically adapted to: dividing the animation data sample into a plurality of animation intervals according to each animation data sample, setting interval labels according to each animation interval, and extracting key frames from each animation interval;
The feature extraction module is specifically adapted to: and respectively aiming at each key frame, combining adjacent frames in the animation interval containing the key frame, determining the animation characteristic information of the key frame, and determining the labeling information of the key frame according to the interval label of the animation interval of the key frame.
Optionally, the generating module is specifically adapted to:
dividing each key frame into a plurality of groups of key frame sets;
respectively aiming at each group of key frame sets, obtaining animation characteristic information and labeling information of the group of key frame sets, and obtaining sample characteristic data corresponding to the group of key frame sets;
and generating the training sample set according to the sample characteristic data of each group of key frame sets.
Optionally, the animation feature information of the key frame includes: the movement speed and/or the movement track of the skeleton nodes corresponding to the limb parts, and the relative position relationship among the skeleton nodes corresponding to the limb parts.
Optionally, the labeling information includes at least one of the following: label type labeling information, attribute type labeling information and scene type labeling information;
wherein the label class annotation information comprises at least one of the following: strength-class labeling information, speed-class labeling information, proficiency-class labeling information, and aggressiveness-class labeling information;
The attribute type annotation information comprises at least one of the following: feature class label information, character class label information, hobby class label information, occupation class label information and region class label information;
the scene class annotation information comprises at least one of the following: object attribute class annotation information corresponding to the interactive object, media attribute class annotation information for describing the fluid media, and apparel class annotation information for describing the apparel information.
The specific structure and working principle of each module may refer to the description of the corresponding parts of the method embodiment, and are not repeated here.
Still another embodiment of the present application provides a non-volatile computer storage medium storing at least one executable instruction; the executable instruction can cause a processor to perform the character animation generation method and/or the animation generation model training method in any of the above method embodiments, and in particular to perform the operations corresponding to the above method embodiments.
Fig. 6 shows a schematic structural diagram of an electronic device according to another embodiment of the present invention, and the specific embodiment of the present invention is not limited to the specific implementation of the electronic device.
As shown in fig. 6, the electronic device may include: a processor 502, a communication interface (Communications Interface) 506, a memory 504, and a communication bus 508.
Wherein:
processor 502, communication interface 506, and memory 504 communicate with each other via communication bus 508.
A communication interface 506 for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is configured to execute the program 510, and may specifically execute the steps in the above-described character animation generation method and animation generation model training method embodiments.
In particular, program 510 may include program code including computer-operating instructions.
The processor 502 may be a central processing unit CPU, or a specific integrated circuit ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the electronic device may be the same type of processor, such as one or more CPUs; but may also be different types of processors such as one or more CPUs and one or more ASICs.
Memory 504 for storing program 510. The memory 504 may comprise high-speed RAM memory or may further comprise non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 510 may be specifically configured to cause the processor 502 to perform the respective operations corresponding to the above-described method embodiments.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may also be used with the teachings herein. The required structure for the construction of such devices is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the best mode of carrying out the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an apparatus according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.

Claims (15)

1. A method of generating character animation, comprising:
in response to a received character configuration request, extracting configuration parameters contained in the character configuration request;
obtaining animation feature information matched with the configuration parameters, and generating animation data corresponding to the animation feature information;
and driving a character model of a target character with the animation data to obtain the character animation of the target character.
2. The method of claim 1, wherein the obtaining animation feature information matched with the configuration parameters and generating animation data corresponding to the animation feature information comprises:
inputting the configuration parameters into an animation generation model, and determining, through the animation generation model, animation feature information matched with the configuration parameters, so as to generate animation data corresponding to the animation feature information.
3. The method according to claim 1 or 2, wherein the character animation is a skeletal skinning animation that includes bone nodes corresponding to respective limb parts, and the animation feature information includes at least one of the following:
the movement speed and/or movement trajectory of the bone nodes corresponding to the limb parts, and the relative positional relationship among the bone nodes corresponding to the limb parts.
4. The method according to any one of claims 1-3, wherein the configuration parameters comprise at least one dimension parameter, and each dimension parameter comprises at least one dimension sub-parameter; wherein the dimension parameters include at least one of: a tag dimension parameter, an attribute dimension parameter, and a scene dimension parameter;
and, in the case that the character configuration request includes a plurality of dimension parameters, the obtaining animation feature information matched with the configuration parameters includes:
determining dimension feature information corresponding to each dimension parameter, and fusing the plurality of pieces of dimension feature information to obtain the animation feature information.
5. The method of claim 4, wherein the character configuration request further comprises: a parameter weight value corresponding to each dimension parameter, input via a weight configuration entry in a configuration interface;
and the determining dimension feature information corresponding to each dimension parameter includes: determining the dimension feature information corresponding to each dimension parameter according to the parameter weight value of that dimension parameter.
6. The method of claim 4 or 5, wherein the tag dimension parameter comprises at least one of the following dimension sub-parameters: a strength sub-parameter, a speed sub-parameter, a proficiency sub-parameter, and an aggressiveness sub-parameter;
the attribute dimension parameter comprises at least one of the following dimension sub-parameters: a feature sub-parameter, a character sub-parameter, a hobby sub-parameter, an occupation sub-parameter, and a region sub-parameter;
and the scene dimension parameter comprises at least one of the following dimension sub-parameters: an object attribute sub-parameter corresponding to an interactive object, a media attribute sub-parameter describing a fluid medium, and an apparel attribute sub-parameter describing apparel information.
7. A training method of an animation generation model, comprising:
extracting key frames from each animation data sample;
for each key frame, extracting animation feature information of the key frame and obtaining annotation information of the key frame;
generating a training sample set according to the animation feature information and the annotation information of each key frame;
and performing training according to the training sample set to obtain the animation generation model.
8. The method of claim 7, wherein the extracting key frames from each animation data sample comprises: for each animation data sample, dividing the animation data sample into a plurality of animation intervals, setting an interval label for each animation interval, and extracting key frames from each animation interval;
and the extracting, for each key frame, animation feature information of the key frame and obtaining annotation information of the key frame comprises:
for each key frame, determining the animation feature information of the key frame in combination with adjacent frames in the animation interval containing the key frame, and determining the annotation information of the key frame according to the interval label of the animation interval to which the key frame belongs.
9. The method of claim 7 or 8, wherein generating the training sample set according to the animation feature information and the annotation information of each key frame comprises:
dividing the key frames into a plurality of groups of key frame sets;
for each group of key frame sets, obtaining the animation feature information and annotation information of that group to obtain the sample feature data corresponding to that group;
and generating the training sample set according to the sample feature data of each group of key frame sets.
10. The method of any of claims 7-9, wherein the animation feature information of a key frame comprises: the movement speed and/or movement trajectory of the bone nodes corresponding to the limb parts, and the relative positional relationship among the bone nodes corresponding to the limb parts.
11. The method of any of claims 7-10, wherein the annotation information comprises at least one of the following: tag-class annotation information, attribute-class annotation information, and scene-class annotation information;
wherein the tag-class annotation information comprises at least one of the following: strength-class annotation information, speed-class annotation information, proficiency-class annotation information, and aggressiveness-class annotation information;
the attribute-class annotation information comprises at least one of the following: feature-class annotation information, character-class annotation information, hobby-class annotation information, occupation-class annotation information, and region-class annotation information;
and the scene-class annotation information comprises at least one of the following: object attribute-class annotation information corresponding to an interactive object, media attribute-class annotation information describing a fluid medium, and apparel-class annotation information describing apparel information.
12. A character animation generating apparatus comprising:
a parameter extraction module adapted to extract, in response to a received character configuration request, configuration parameters contained in the character configuration request;
an acquisition module adapted to acquire animation feature information matched with the configuration parameters and generate animation data corresponding to the animation feature information;
and an animation module adapted to drive a character model of a target character with the animation data to obtain the character animation of the target character.
13. A training apparatus for an animation generation model, comprising:
a key frame extraction module adapted to extract key frames from each of the animation data samples;
a feature extraction module adapted to extract, for each key frame, animation feature information of the key frame and obtain annotation information of the key frame;
a generation module adapted to generate a training sample set according to the animation feature information and the annotation information of each key frame;
and a training module adapted to perform training according to the training sample set to obtain the animation generation model.
14. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the character animation generation method according to any one of claims 1 to 6 and/or the animation generation model training method according to any one of claims 7 to 11.
15. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the character animation generation method according to any one of claims 1 to 6 and/or the animation generation model training method according to any one of claims 7 to 11.
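Note (illustrative only, not part of the claims): one way the weighted fusion of dimension feature information recited in claims 4 and 5 could be realized is sketched below, under the assumption that each piece of dimension feature information is a fixed-length numeric vector and that fusion is a normalized weighted average over the dimension parameters; the claims do not prescribe this representation or fusion strategy, and the function and variable names are hypothetical:

import numpy as np

def fuse_dimension_features(dim_features, weights):
    """Fuse per-dimension feature vectors into a single animation feature vector.

    dim_features: dict mapping a dimension name ("tag", "attribute", "scene")
                  to its dimension feature vector (numpy arrays of equal length).
    weights:      dict of parameter weight values, e.g. as entered via the
                  weight configuration entry of a configuration interface.

    A normalized weighted average is only one possible fusion strategy.
    """
    names = list(dim_features)
    w = np.array([weights.get(name, 1.0) for name in names], dtype=float)
    w = w / w.sum()  # normalize the weights so they sum to 1
    stacked = np.stack([dim_features[name] for name in names])
    return (w[:, None] * stacked).sum(axis=0)

# Example: three dimension parameters with user-configured weights.
features = {
    "tag":       np.array([0.9, 0.2, 0.1]),
    "attribute": np.array([0.4, 0.7, 0.3]),
    "scene":     np.array([0.1, 0.5, 0.8]),
}
weights = {"tag": 2.0, "attribute": 1.0, "scene": 1.0}
fused = fuse_dimension_features(features, weights)  # array of length 3

Other fusion strategies, such as concatenating the dimension feature vectors and passing them through a learned projection, would fit the claim language equally well.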
CN202111661942.0A 2021-12-30 2021-12-30 Character animation generation method, animation generation model training method and device Pending CN116433808A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111661942.0A CN116433808A (en) 2021-12-30 2021-12-30 Character animation generation method, animation generation model training method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111661942.0A CN116433808A (en) 2021-12-30 2021-12-30 Character animation generation method, animation generation model training method and device

Publications (1)

Publication Number Publication Date
CN116433808A true CN116433808A (en) 2023-07-14

Family

ID=87091253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111661942.0A Pending CN116433808A (en) 2021-12-30 2021-12-30 Character animation generation method, animation generation model training method and device

Country Status (1)

Country Link
CN (1) CN116433808A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215930A (en) * 2020-10-19 2021-01-12 珠海金山网络游戏科技有限公司 Data processing method and device

Similar Documents

Publication Publication Date Title
US11113860B2 (en) Particle-based inverse kinematic rendering system
US10679044B2 (en) Human action data set generation in a machine learning system
US6552729B1 (en) Automatic generation of animation of synthetic characters
US11373354B2 (en) Techniques for rendering three-dimensional animated graphics from video
JP6082101B2 (en) Body motion scoring device, dance scoring device, karaoke device, and game device
CN110827383B (en) Attitude simulation method and device of three-dimensional model, storage medium and electronic equipment
US11244489B2 (en) Method and system for determining identifiers for tagging video frames
CN107423398A (en) Exchange method, device, storage medium and computer equipment
Ludl et al. Enhancing data-driven algorithms for human pose estimation and action recognition through simulation
US11562523B1 (en) Enhanced animation generation based on motion matching using local bone phases
CN112270734A (en) Animation generation method, readable storage medium and electronic device
Ribet et al. Survey on style in 3d human body motion: Taxonomy, data, recognition and its applications
Guo et al. Action2video: Generating videos of human 3d actions
US20230177755A1 (en) Predicting facial expressions using character motion states
CN116433808A (en) Character animation generation method, animation generation model training method and device
Dong et al. Skeleton-based human motion prediction with privileged supervision
KR102171319B1 (en) Appratus for writing motion-script, appratus for self-learning montion and method for using the same
US8933940B2 (en) Method and system for creating animation with contextual rigging
CN115797517B (en) Data processing method, device, equipment and medium of virtual model
Etemad et al. Modeling and transformation of 3D human motion
CN116700471A (en) Method and system for enhancing user experience of virtual reality system
Malek-Podjaski et al. Adversarial Attention for Human Motion Synthesis
Cai et al. Immersive interactive virtual fish swarm simulation based on infrared sensors
US20240135618A1 (en) Generating artificial agents for realistic motion simulation using broadcast videos
KR102342760B1 (en) The golf image learning apparatus based on the artificial intelligence, and the method thereof and recording medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination