WO2021155686A1 - Animation production method and apparatus, computing device, and storage medium - Google Patents

Animation production method and apparatus, computing device, and storage medium

Info

Publication number
WO2021155686A1
Authority
WO
WIPO (PCT)
Prior art keywords
bone
shape model
target
reference bone
posture
Prior art date
Application number
PCT/CN2020/125924
Other languages
English (en)
French (fr)
Inventor
刘杰
李静翔
张华
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to JP2022519774A (JP7394977B2)
Priority to KR1020227004104A (KR102637513B1)
Publication of WO2021155686A1
Priority to US17/680,921 (US11823315B2)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20036: Morphological image processing
    • G06T 2207/20044: Skeletonization; Medial axis transform

Definitions

  • This application relates to computer technology, and provides a method, device, computing device, and storage medium for making animation.
  • Animated characters are widely used in games and movies.
  • In the related technology, the shape model of an animated character is determined mainly by the skeletal posture of the character.
  • The shape model is analogous to human skin, and the bone pose is analogous to human bone; Figure 1 shows a schematic diagram of the arm posture of an animated character.
  • The positions of the bones should correspond to the shape model.
  • For example, when the elbow is bent, the upper arm bulges to produce the feeling of protruding muscle, and the connecting part of the upper and lower arms is compressed to simulate the squeezing of real human muscle.
  • Figure 2 is a schematic diagram of the arm posture of an animated character when the elbow is bent. To make the animated character look lifelike, its appearance must deform correspondingly with the movement of the bones of the whole body.
  • a method, device, computing device, and storage medium for making animation are provided.
  • This application provides a method for making animation, which is executed by a computing device, and the method includes:
  • in response to a posture selection instruction for a non-reference bone posture, calling a target plug-in node, and obtaining, according to the posture selection instruction, the non-reference bone shape model corresponding to the non-reference bone posture from a non-reference bone shape model set, where the non-reference bone shape model set includes the non-reference bone shape model corresponding to each non-reference bone posture;
  • determining the target bone pose according to a parameter input instruction for the target bone pose parameters of the animated character, and generating the target bone shape model of the target bone pose based on the obtained non-reference bone shape models.
  • the present application provides a device for making an animation, the device includes:
  • a calling unit, used to call the target plug-in node in response to the posture selection instruction for the non-reference bone posture, and obtain the non-reference bone shape model corresponding to the non-reference bone posture from the non-reference bone shape model set according to the posture selection instruction, where the non-reference bone shape model set includes the non-reference bone shape model corresponding to each non-reference bone posture;
  • a generating unit, used to determine the target bone pose according to the parameter input instruction for the target bone pose parameters of the animated character, and generate the target bone shape model of the target bone pose based on the obtained non-reference bone shape models.
  • an embodiment of the present application provides a computing device for making animation, including a memory and a processor.
  • The memory stores computer-readable instructions.
  • When the computer-readable instructions are executed, the processor performs the steps of the method for making animation.
  • The embodiments of the present application also provide one or more non-volatile storage media storing computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the processors perform the steps of the foregoing method for making animation.
  • The method, device, and storage medium for making animation of the present application call the target plug-in node according to the posture selection instruction for the non-reference bone posture, obtain from the non-reference bone shape model set the non-reference bone shape model corresponding to the non-reference bone posture determined by the instruction, and generate the target bone shape model of the target bone pose based on the obtained non-reference bone shape models, where the target bone pose is determined according to the parameter input instruction for the target bone pose parameters of the animated character. Selecting through the plug-in node the non-reference bone poses whose shape models are used to make the animation improves the flexibility of the character production process; non-reference bone poses that are not selected are not deleted and need not be remade, which reduces the amount of calculation and improves execution efficiency.
  • Figure 1 is a schematic diagram of the arm posture of an animated character
  • Figure 2 is a schematic diagram of the arm posture of the animated character when the elbow is bent
  • FIG. 3 is a schematic diagram of generating a shape model corresponding to a non-reference bone posture in the related technology
  • FIG. 4 is a schematic diagram of an application scenario for making animation provided by an embodiment of the application
  • FIG. 5 is a flowchart of a method for making animation provided by an embodiment of the application.
  • Fig. 6 is a display interface for triggering a posture selection instruction provided by an embodiment of the application
  • FIG. 7 is a schematic diagram of triggering a posture selection instruction provided by an embodiment of the application.
  • FIG. 8 is another display interface for triggering a posture selection instruction provided by an embodiment of the application.
  • FIG. 9 is a display interface diagram for determining a target radial function provided by an embodiment of this application.
  • FIG. 10 is a flow chart of implementing animation production through plug-in nodes provided by an embodiment of the application.
  • FIG. 11 is a schematic diagram of a target bone shape model generated based on a non-reference bone shape model according to an embodiment of the application.
  • FIG. 12 is a structural diagram of a device for making animation provided by an embodiment of the application.
  • FIG. 13 is a structural diagram of a computing device for making animation provided by an embodiment of the application.
  • Animation production software: a general term for software used to produce animated characters.
  • Animation production software includes Maya software, Blender software, Houdini software, etc.
  • Maya software covers modeling, animation, rendering, and special effects; it is a three-dimensional modeling and animation software.
  • Blender software is an open-source, cross-platform, all-round three-dimensional animation production software, providing solutions for modeling, animation, materials, rendering, audio processing, and video editing.
  • Houdini software is a three-dimensional computer graphics software.
  • Animated character: a virtual character drawn by a 3D (Three-Dimensional) game engine or animation production software, with the aid of 3D graphics modeling and rendering technology.
  • the virtual character may be a virtual object with a skeletal posture and an appearance posture, such as a virtual character and a virtual animal.
  • Skeletal animation: each animated character contains at least two kinds of data, bone posture and appearance posture.
  • Animation made by driving changes of the appearance posture through the skeletal posture is called skeletal animation.
  • Animated character shaping (Skinning): the shape of an animated character changes with the bone posture, so the bones must be defined together with which vertices of the shape model each bone can drive.
  • The Skinning process specifies the driving relationship between all the bones and the vertices of the shape model.
  • When the bones move, the vertices of the shape model move with them; that is, the shape model of the animated character changes.
  • As shown in Figure 1, assuming that the bone posture and shape model of Figure 1 are the initial posture, when the bone posture changes from the posture of Figure 1 to the posture of Figure 2, the shape model changes at the same time, from the shape model of Figure 1 to the shape model shown in Figure 2.
  • The shape model changes because the bones drive the vertices of the shape model.
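The Skinning driving relationship above can be sketched as a simple linear blend skinning step. The array shapes and the idea of per-vertex bone weights are illustrative assumptions, not the patent's exact data layout:

```python
import numpy as np

def linear_blend_skinning(rest_vertices, bone_transforms, weights):
    """Move shape-model vertices with the bones that drive them.

    rest_vertices:   (V, 3) vertex positions in the initial posture.
    bone_transforms: (B, 4, 4) matrices taking each bone from the initial
                     posture to the new posture.
    weights:         (V, B) driving weights specified by Skinning;
                     each row sums to 1.
    """
    V = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((V, 1))])          # (V, 4)
    # Transform every vertex by every bone, then blend per vertex.
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)  # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)        # (V, 4)
    return blended[:, :3]
```

When the bone matrices change from the Figure 1 posture to the Figure 2 posture, rerunning this function moves the vertices, which is the "bones drive the vertices" behavior described above.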
  • RBF: Radial Basis Function.
  • PoseDriver method: a scheme that uses the RBF algorithm to read the skeletal posture of the animated character and obtain the new character shape.
  • BlendShape is a data storage form that records the shape model data of animated characters.
  • Modeler: the producer who determines the driving relationship between the skeletal posture and the shape model.
  • the modeler establishes the correspondence between the skeletal posture of the animated character and the shape model of the animated character, and when the skeletal posture is changed, the shape model of the animated character changes accordingly through the Skinning process.
  • One method is to use the PoseDriver method to perform surface deformation on the shape model of the animated character to obtain a new shape model.
  • For example, when an animated character is made in the Maya animation software, the PoseDriver method is used to perform surface deformation on the character's shape model to obtain a new shape model.
  • Surface deformation refers to the process in which the shape model changes with the posture of the bones.
  • Generally, the driving relationship specified by Skinning is used to change the shape model, but the effect of this change alone is not good.
  • To achieve a better visual effect, and to let the modeler customize the result based on experience, PoseDriver is applied on top of Skinning to obtain a new shape model.
  • This requires the modeler to pre-define the shape models of the bones in different poses.
  • Taking the upper arm bone as an example, five reference bone postures are usually defined: upper arm level, upper arm forward, upper arm upward, upper arm downward, and upper arm backward.
  • When making the shape model corresponding to a non-reference bone pose, the modeler uses the five pre-defined reference bone poses and the corresponding reference bone shape models, and completes the production in the animation production software. In this process, the non-reference bone pose and the reference bone shape models corresponding to the reference bone poses are used, and the Skinning process determines the non-reference bone shape model corresponding to the non-reference bone pose. If the modeler believes that the current non-reference bone shape model cannot meet the requirements, the modeler modifies it according to the requirements until a satisfactory non-reference bone shape model is obtained.
  • After a non-reference bone shape model that meets the requirements is obtained, a reverse coordinate-space conversion transforms it back to the coordinate system used before the Skinning process; this step can be called the InvertShape calculation. The model after InvertShape and the shape models corresponding to the reference bone poses are then unified under one coordinate system.
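The InvertShape step can be sketched as applying the inverse of each vertex's blended skinning transform, mapping a sculpted model back to the pre-Skinning coordinate system. The per-vertex matrix inverse here is a simplified illustration, not the exact algorithm of the animation software:

```python
import numpy as np

def invert_shape(posed_vertices, bone_transforms, weights):
    """Map sculpted, posed vertices back to the coordinate system used
    before the Skinning process (simplified InvertShape calculation).

    posed_vertices:  (V, 3) vertices of the sculpted non-reference model.
    bone_transforms: (B, 4, 4) bone matrices for the non-reference pose.
    weights:         (V, B) Skinning weights; each row sums to 1.
    """
    V = posed_vertices.shape[0]
    homo = np.hstack([posed_vertices, np.ones((V, 1))])
    out = np.empty_like(posed_vertices)
    for v in range(V):
        # Blend this vertex's bone matrices, then undo the blended transform.
        blended = np.einsum('b,bij->ij', weights[v], bone_transforms)
        out[v] = (np.linalg.inv(blended) @ homo[v])[:3]
    return out
```

After this step the sculpted model and the reference bone shape models share one coordinate system, so they can be compared and blended.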
  • FIG. 3 it is a schematic diagram of generating a non-reference bone shape model corresponding to a non-reference bone posture.
  • non-reference bone poses are generally called specific bone poses to distinguish them from reference bone poses.
  • the shape models corresponding to non-reference bone poses are also called specific bone shape models.
  • the shape model corresponding to any new bone pose can be determined by using the generated non-reference bone pose and the corresponding non-reference bone shape model.
  • Non-reference bone poses are generally defined by parameters such as action command parameters and bone bending angle parameters; these parameters are collectively referred to as bone pose parameters.
  • From the input bone pose parameters, the target bone pose that needs to be generated can be determined.
  • For example, after the parameters are input, the upper arm bones of the animated character move to produce a new target bone pose; then, from the non-reference bone shape models corresponding to the non-reference bone poses, the shape model corresponding to the new target bone pose is generated.
  • embodiments of the present application provide methods, devices, computing devices, and storage media for making animations.
  • the embodiments of the present application involve artificial intelligence (AI) and machine learning technology, and are designed based on computer vision (CV) technology and machine learning (ML) in artificial intelligence.
  • Artificial intelligence is a theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • artificial intelligence is a comprehensive technology of computer science, which attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a similar way to human intelligence.
  • Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Artificial intelligence technology mainly includes several major directions such as computer vision technology, speech processing technology, and machine learning/deep learning.
  • With the research and progress of artificial intelligence technology, artificial intelligence has been researched and applied in many fields, such as smart homes, image retrieval, video surveillance, smart speakers, smart marketing, unmanned driving, autonomous driving, drones, and robots. It is believed that with the development of technology, artificial intelligence will be applied in more fields and exert more and more important value.
  • Computer vision technology is an important application of artificial intelligence. It studies related theories and technologies and attempts to establish an artificial intelligence system that can obtain information from pictures, videos or multi-dimensional data to replace human visual interpretation.
  • Typical computer vision technology usually includes image processing and video analysis.
  • the method for making animation provided by the embodiment of the present application involves image processing.
  • Machine learning is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other subjects. It specializes in studying how computers simulate or realize human learning behaviors in order to acquire new knowledge or skills, and how they reorganize existing knowledge structures to continuously improve their own performance.
  • Machine learning is the core of artificial intelligence, the fundamental way to make computers intelligent, and its applications cover all fields of artificial intelligence.
  • Machine learning and deep learning usually include artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning and other technologies.
  • a scheme based on the RBF algorithm and the PoseDriver method is used to produce the skeleton animation of the animated character.
  • In this application, in order to retain the non-reference bone shape models that have not been selected, plug-in technology is used to store all non-reference bone shape models outside the animation production software, forming a non-reference bone shape model set.
  • The plug-in is used to call the shape models of the selected non-reference bone poses from the non-reference bone shape model set to make the shape model of the target bone pose, so there is no need to delete the non-reference bone poses that are not selected.
  • In the animation production method provided by this application, each time an animation is made, the target plug-in node is called in response to the posture selection instruction for the non-reference bone posture, and the selected non-reference bone shape model is obtained from the non-reference bone shape model set according to the instruction, where the set contains the non-reference bone shape model corresponding to each non-reference bone posture.
  • After the non-reference bone shape models are obtained, the target bone shape model of the target bone pose is generated based on them, where the target bone pose is determined according to the parameter input instruction for the target bone pose parameters of the animated character.
  • In this way, the non-reference bone shape models used to make the animated character's bone shape model can be selected according to actual needs, which improves flexibility in the animation production process; a more natural shape model can be generated from the selected non-reference bone shape models; and the unselected non-reference bone shape models need not participate in generating the target bone shape model, which reduces the amount of calculation and improves execution efficiency.
  • Moreover, this application uses a plug-in to store the collection of non-reference bone shape models.
  • Each time an animation is made, at least one non-reference bone shape model is obtained from the collection for use, while the unused non-reference bone shape models remain stored in the collection and are not deleted; there is no need to recreate them, which reduces the workload and improves execution efficiency.
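The store-without-deleting behavior of the plug-in node can be sketched as a small registry. The class and method names are hypothetical, not the patent's API:

```python
class PoseModelPlugin:
    """Sketch of a plug-in node holding the non-reference bone shape
    model set outside the animation software."""

    def __init__(self):
        self._models = {}    # non-reference pose name -> shape model data
        self._enabled = {}   # non-reference pose name -> allowed or not

    def register(self, pose_name, shape_model):
        self._models[pose_name] = shape_model
        self._enabled[pose_name] = True

    def disable(self, pose_name):
        # A disabled pose is excluded from the current build only;
        # its shape model stays in storage for later reuse.
        self._enabled[pose_name] = False

    def allow(self, pose_name):
        self._enabled[pose_name] = True

    def delete(self, pose_name):
        # Only an explicit delete instruction removes the stored model.
        del self._models[pose_name]
        del self._enabled[pose_name]

    def selected_models(self):
        """Models fetched in response to a posture selection instruction."""
        return {name: model for name, model in self._models.items()
                if self._enabled[name]}
```

Because disable and delete are separate operations, a disabled pose can be re-allowed later without being remade.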
  • In some embodiments, for any non-reference bone posture, the vector distance between the non-reference bone posture and the target bone posture is determined; based on the RBF target radial function, the vector distances corresponding to the non-reference bone postures are transformed into the radial function space, where the target radial function is selected from preset radial functions according to a function selection instruction; each vector distance is linearly mapped in the radial function space to determine the weight of the non-reference bone posture corresponding to that vector distance; and the non-reference bone shape models corresponding to the non-reference bone postures are weighted and summed with these weights to generate the target bone shape model of the target bone posture.
  • The target radial function is a function of the vector distance d and a constant k; using it as the target radial function, the visual effect of the generated target bone shape model is more natural.
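The weighting-and-blending pipeline above can be sketched with NumPy. The quadratic kernel f(d) = k·d² is an assumption taken from the later statement that the function value is proportional to the square of the vector distance; the exact preset functions appear in the patent only as formulas:

```python
import numpy as np

def rbf_pose_weights(pose_vectors, target_pose, k=1.0):
    """Weights of the non-reference poses for one target pose.

    pose_vectors: (N, 3) non-reference bone poses as 3D vectors.
    target_pose:  (3,) target bone pose vector.
    Assumes the target radial function f(d) = k * d**2 (illustrative).
    """
    f = lambda d2: k * d2                               # f(d) = k * d^2
    diff = pose_vectors[:, None, :] - pose_vectors[None, :, :]
    phi = f(np.sum(diff ** 2, axis=-1))                 # (N, N) pairwise kernel
    d2_target = np.sum((pose_vectors - target_pose) ** 2, axis=1)
    # Linear mapping in radial-function space: each stored pose should get
    # weight 1 at itself and 0 at the others, so solve phi @ C = I and
    # evaluate the result at the target's vector distances.
    n = len(pose_vectors)
    coeffs = np.linalg.solve(phi + 1e-9 * np.eye(n), np.eye(n))
    return f(d2_target) @ coeffs

def blend_shape_models(shape_models, weights):
    """Weighted sum of non-reference bone shape models, (N, V, 3) -> (V, 3)."""
    return np.einsum('n,nvi->vi', weights, shape_models)
```

When the target pose coincides with a stored non-reference pose, the weights reduce to selecting that pose's shape model, which is the expected interpolation behavior.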
  • the server 40 communicates with a plurality of terminal devices 41 installed with animation production software or game engines through a network.
  • the network may be, but not limited to, a local area network, a metropolitan area network, or a wide area network.
  • The terminal device 41 can be a personal computer (PC), a tablet computer, a personal digital assistant (PDA), a notebook, a mobile phone, or another terminal device; it can also be a portable or pocket-sized mobile computer.
  • the server 40 can be any background running device that can provide Internet services for managing stored data.
  • The terminal device 41 is used to install animation production software or a game engine, display the various operation interfaces of the software through its display screen, receive the operation instructions triggered by the user through the operation interface, and transmit the operation instructions to the server 40, so that the server 40 responds to the instructions to make an animated character, which is then displayed on the display screen of the terminal device 41.
  • servers 40 may be deployed in various regions, or for load balancing, different servers 40 may serve the process of making animated characters corresponding to each terminal device 41 respectively.
  • the multiple servers 40 can share data through the blockchain, and the multiple servers 40 are equivalent to a data sharing system composed of multiple servers 40.
  • For example, the terminal device 41 located at location a communicates with one server 40, and the terminal device 41 located at location b communicates with another server 40.
  • For each server 40 in the data sharing system, there is a node identifier corresponding to that server 40, and each server 40 can store the node identifiers of the other servers 40 in the data sharing system, so as to broadcast generated blocks to the other servers 40 according to their node identifiers.
  • Each server 40 can maintain a node identification list as shown in Table 1, storing the name of each server 40 and its node identifier correspondingly.
  • The node identifier may be an IP (Internet Protocol) address or any other information that can identify the node; Table 1 uses the IP address only as an example.
  • the terminal device 41 determines the posture selected by the modeler and reports the selected posture to the server 40.
  • The server 40 responds to the posture selection instruction and calls the target plug-in node according to the instruction, and the target plug-in node obtains the corresponding non-reference bone shape model according to the posture selection instruction.
  • the terminal device 41 receives the parameter input of the modeler for the target bone pose parameters of the animated character, and reports the input parameters to the server 40,
  • the server 40 determines the target bone posture according to the input parameters, and generates the target bone shape model of the target bone posture based on the target bone posture, the non-reference bone posture, and the non-reference bone shape model.
  • the bone shape model of the animated character is changed according to the bone posture of the animated character, so that the surface of the animated character is deformed.
  • When a bone moves and a new bone pose is generated, the bone shape model corresponding to the new bone pose is generated according to the correspondence between bone poses and bone shape models established in advance for that bone.
  • However, not all pre-established correspondences meet the conditions for making an animated character, so the shape model of some bone poses looks unnatural and affects the visual effect; moreover, in the related technology, disabling a certain skeletal posture and its bone shape model requires deleting them, and when they are needed later they must be re-established, which wastes animation production time and further reduces execution efficiency.
  • In this application, a disable-or-allow function is set for each pre-established bone pose and corresponding bone shape model, so that the non-reference bone poses and corresponding non-reference bone shape models used to make the target bone shape model can be selected according to actual needs, and the target bone shape model of the target bone pose of the animated character is generated from them.
  • a flow chart of a method for making animation includes the following steps:
  • Step 500 Receive the target bone pose parameters input for the target bone of the animated character, and determine the target bone pose according to the target bone pose parameters.
  • In some embodiments, the target bone pose parameter is the target bone position information.
  • For example, the target bone pose parameter can be "arm bent 30°",
  • that is, the angle between the upper arm and the forearm is 30°;
  • the state in which the angle between the arms is 30° is the target bone pose.
  • The target bone pose parameters can also be input motion information, such as jumping; the motion information determines the various bone poses of the animated character in that state, and, for example, each bone posture of the animated character in the jumping state can be preset.
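As an illustration of how such a parameter could be turned into the 3D pose vector used later in the distance calculation, the bend angle might be encoded as a direction vector (a hypothetical encoding, not the patent's):

```python
import math

def arm_pose_vector(bend_degrees):
    """Encode the 'arm bent by N degrees' parameter as a 3D bone-pose
    vector: the forearm direction in the upper arm's frame, with
    0 degrees meaning fully extended along +x (hypothetical encoding)."""
    rad = math.radians(bend_degrees)
    return (math.cos(rad), math.sin(rad), 0.0)
```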
  • the target bone shape model in the target bone pose needs to be determined.
  • Since the target bone shape model is generated by fusing pre-set non-reference bone shape models, multiple non-reference bone shape models must be determined.
  • The multiple non-reference bone shape models must be able to generate the target bone shape model, and make the generated target bone shape model have a natural curve that satisfies the visual effect.
  • This process may need to be repeated many times until the generated target bone shape model has a natural curve and meets conditions such as the visual effect; the non-reference bone poses used when the conditions are met are then packaged and stored for use in the game.
  • Obtaining the non-reference bone shape model from the non-reference bone shape model collection stored in the plug-in node is executed after receiving the posture selection instruction of the non-reference bone pose.
  • Step 501 In response to the posture selection instruction for the non-reference bone posture, call the target plug-in node, and obtain the non-reference bone shape model corresponding to the non-reference bone posture from the non-reference bone shape model set according to the posture selection instruction.
  • For each non-reference bone pose, a corresponding non-reference bone shape model is generated in advance, and the generated non-reference bone shape model is stored in the plug-in node.
  • Once stored in the non-reference bone shape model collection, it can be used when making the target bone shape model.
  • The non-reference bone poses need to be stored correspondingly as well: each bone has multiple non-reference bone poses and corresponding non-reference bone shape models stored in advance, so that the non-reference bone shape model corresponding to the non-reference bone pose included in the pose selection instruction can be determined according to the instruction.
  • In some embodiments, the posture selection instruction is triggered manually on the display interface.
  • FIG. 6 shows a display interface for triggering the posture selection instruction provided in this embodiment of the application.
  • Each non-reference bone pose parameter is correspondingly set with a disable-or-allow function, and each non-reference bone pose parameter corresponds to a non-reference bone posture.
  • FIG. 7 is a schematic diagram of a trigger posture selection instruction provided in this embodiment of the present application.
  • When the pose selection instruction is used to instruct that certain bone poses be disabled, and the target plug-in node obtains non-reference bone shape models from the non-reference bone shape model collection, it obtains the non-reference bone shape models corresponding to the non-reference bone poses that have not been instructed to be disabled.
  • When the pose selection instruction is used to instruct the use of certain bone poses, the target plug-in node is called, and it obtains from the non-reference bone shape model collection the non-reference bone shape models corresponding to the non-reference bone poses instructed to be used.
  • In some embodiments, a delete function is also set for the non-reference bone postures displayed on the display page.
  • When a posture deletion instruction is received, the non-reference bone posture corresponding to the instruction is deleted in response, and the non-reference bone shape model corresponding to that posture is deleted from the non-reference bone shape model collection.
  • The delete function can be implemented by setting a delete button for each non-reference bone pose, as shown in Figure 6; by setting a delete area for each non-reference bone pose; or by setting only one delete area that is valid for all non-reference bone poses.
  • When the single delete area is used, the non-reference bone pose to be deleted is dragged into the delete area to delete it.
  • the non-reference bone shape model can be displayed on the display interface, and functions such as disabling and/or allowing use, and deleting can be set, which are the same as in FIG. 6 and will not be repeated here.
  • In some embodiments, only non-reference bone poses and/or non-reference bone shape models are set on the display page, and functions such as disable, use, and delete are not set.
  • Because this application separates the disable-or-allow function from the delete function, when a certain bone pose is disabled, the bone shape model corresponding to the disabled bone pose is not used when creating the animated bone shape model, but it is not deleted; the next time the target bone shape model of a target bone pose is made, the bone shape model corresponding to that bone pose can still be used without being regenerated, which reduces the operation process and improves execution efficiency.
After the non-reference bone shape models of the non-reference bone postures are obtained, the target bone shape model of the target bone posture is generated based on them, where the target bone posture is the one determined in step 500 according to the parameter input instruction for the target bone posture parameters of the animated character.
Step 502: Generate the target bone shape model of the target bone posture based on the obtained non-reference bone shape models of the non-reference bone postures.
At this point, the non-reference bone postures and their corresponding non-reference bone shape models have been determined according to the posture selection instruction, and the target bone posture has been determined according to the parameter input instruction for the target bone posture parameters of the animated character. The target bone shape model is then generated using the non-reference bone postures, the non-reference bone shape models, and the target bone posture. In a possible implementation, the target bone shape model is generated from these three parameters based on the RBF algorithm.
When at least two non-reference bone postures are obtained, the non-reference bone shape models corresponding to the at least two postures are determined. Then, for each obtained non-reference bone posture, the vector distance between that posture and the target bone posture is determined, where a bone posture is a three-dimensional (3D) mathematical vector. Each vector distance is transformed into the radial function space based on the target radial function of the radial basis function (RBF); a linear mapping is performed in the radial function space to determine the weight of the non-reference bone posture corresponding to each vector distance; and the corresponding non-reference bone shape models are weighted and summed using these weights to generate the target bone shape model of the target bone posture.
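The weighting-and-summing steps above can be sketched in code. The sketch below is a hedged illustration rather than the patent's implementation: it assumes bone postures are plain 3D NumPy vectors and shape models are vertex arrays, uses the thin-plate radial function as the target radial function, and realizes the "linear mapping in the radial function space" in the usual RBF way, by solving a linear system built from the pairwise distances of the stored postures (function names such as `rbf_pose_weights` are invented for illustration).

```python
import numpy as np

def thin_plate(d, k=1.0):
    """Target radial function F(d) = k * d^2 * ln(d), with F(0) = 0."""
    d = np.asarray(d, dtype=float)
    out = np.zeros_like(d)
    nz = d > 0
    out[nz] = k * d[nz] ** 2 * np.log(d[nz])
    return out

def rbf_pose_weights(stored_poses, target_pose, kernel=thin_plate):
    """Weight of each stored non-reference posture for a target posture.

    stored_poses: (n, 3) array, each bone posture a 3D vector
    target_pose:  (3,) array
    """
    P = np.asarray(stored_poses, dtype=float)
    t = np.asarray(target_pose, dtype=float)
    # Pairwise distances between stored postures -> interpolation matrix.
    Phi = kernel(np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1))
    # Distance from the target to each stored posture, through the kernel.
    phi_t = kernel(np.linalg.norm(P - t, axis=-1))
    # "Linear mapping in the radial function space": solving this system
    # makes the weights exactly one-hot whenever the target coincides with
    # a stored posture, and a smooth blend in between.
    return np.linalg.solve(Phi, phi_t)

def blend_shapes(weights, shape_models):
    """Weighted sum of shape models (each an (m, 3) vertex array)."""
    return np.tensordot(np.asarray(weights, dtype=float),
                        np.asarray(shape_models, dtype=float), axes=1)
```

When the target posture coincides with a stored posture, `rbf_pose_weights` returns a one-hot weight vector, so the blend reproduces that stored shape model exactly.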
The function value of the target radial function is proportional to the square of the vector distance and to the logarithm of the vector distance, as in the following formula: F(d) = k · d² · ln(d), where d is the vector distance and k is a constant.
In a possible implementation, the target radial function may also be determined by a function selection instruction. In this case, the display interface includes at least two radial functions; FIG. 9 shows a display interface for determining the target radial function. The display interface includes at least the following selectable radial functions: a linear function F(d) = k·d; a Gaussian function F(d) = e^(−(k·d)²); and a special radial function, also called the thin-plate function, F(d) = k·d²·ln(d), where d is the vector distance and k is a constant.
In a possible implementation, the method for making animation can be implemented through multiple plug-in nodes, with one plug-in node implementing multiple functional steps. FIG. 10 is a flowchart, provided by this embodiment of the application, of implementing animation production through plug-in nodes. As can be seen from FIG. 10, when an animation is produced, four plug-in nodes can be used to implement the PoseDriver process of the production flow.
First plug-in node: when an animated character needs to be produced, the parameter input for the target bone posture parameters of the character is determined, and the first plug-in node is called; it determines the target bone posture corresponding to the parameters and inputs it into the second plug-in node.
Second plug-in node: it determines the non-reference bone postures used when generating the target bone shape model. In response to the posture selection instruction for the non-reference bone postures, the second plug-in node is called; according to the posture selection instruction, it obtains the corresponding non-reference bone shape models from the non-reference bone shape model set and inputs the determined postures, their shape models, and the target bone posture into the third plug-in node. It should be noted that the second plug-in node is the target plug-in node provided in this embodiment of the application.
In a possible implementation, the second plug-in node may also perform the following function: according to the target bone posture and the non-reference bone postures, determine the vector distance between each non-reference bone posture and the target bone posture. Since a bone posture is a standard 3D mathematical vector, the vector distance formula can be used to determine the distance between a non-reference bone posture and the target bone posture; the vector distances and the non-reference bone shape models are then input into the third plug-in node.
Third plug-in node: it determines the target radial function used when generating the target bone shape model from the obtained non-reference bone shape models. In response to a function selection instruction, the third plug-in node is called to determine the corresponding target radial function. Based on the non-reference bone postures and the target bone posture determined by the second plug-in node, it determines the vector distances, maps them into the radial function space using the target radial function, performs a linear mapping in that space to determine the weight of the non-reference bone posture corresponding to each vector distance, and inputs the determined weights and the non-reference bone shape models into the fourth plug-in node. If the second plug-in node instead inputs the vector distances and the non-reference bone shape models directly, the vector distances are mapped straight into the radial function space and then linearly mapped to determine the weights.
Fourth plug-in node: it receives the weights and performs shape fusion according to the weight of each non-reference bone posture and the corresponding non-reference bone shape model to obtain the target bone shape model.
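The division of labour among the four plug-in nodes can be sketched as a small pipeline. This is a minimal illustration under stated assumptions, not the patent's plug-in API: class and method names are invented, bone postures are plain 3D vectors, a Gaussian kernel stands in for the user-selected radial function, and the third node's "linear mapping" is realized by solving the RBF interpolation system.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class PoseDriverPipeline:
    poses: np.ndarray   # (n, 3) stored non-reference bone postures
    shapes: np.ndarray  # (n, m, 3) their shape models (m vertices each)
    kernel: Callable    # target radial function F(d)

    # Node 1: parameter input -> target bone posture (pass-through here).
    def node1_target_pose(self, params):
        return np.asarray(params, dtype=float)

    # Node 2: posture selection -> enabled subset and distances to target.
    def node2_select(self, target, enabled):
        P, S = self.poses[enabled], self.shapes[enabled]
        d = np.linalg.norm(P - target, axis=-1)
        return P, S, d

    # Node 3: distances -> radial function space -> per-posture weights.
    def node3_weights(self, P, d):
        Phi = self.kernel(np.linalg.norm(P[:, None] - P[None, :], axis=-1))
        return np.linalg.solve(Phi, self.kernel(d))

    # Node 4: shape fusion -- weighted sum of the shape models.
    def node4_fuse(self, w, S):
        return np.tensordot(w, S, axes=1)

    def run(self, params, enabled):
        t = self.node1_target_pose(params)
        P, S, d = self.node2_select(t, enabled)
        return self.node4_fuse(self.node3_weights(P, d), S)
```

Disabling a posture is simply excluding it through `enabled`; its shape model stays in `shapes`, mirroring how the plug-in keeps unselected models in the set instead of deleting them.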
FIG. 11 is a schematic diagram of a target bone shape model generated, using the method for making animation of this embodiment, based on the obtained non-reference bone shape models of the non-reference bone postures.
The method for making animation provided by this embodiment selects, at the plug-in node, the non-reference bone postures used for producing the bone shape model, which improves flexibility in the character production process; non-reference bone postures that are not selected are not deleted, so they do not need to be remade, which reduces the amount of calculation and improves execution efficiency.
Based on the same inventive concept, an embodiment of the present application also provides an apparatus 1200 for making animated characters. The apparatus 1200 includes a calling unit 1201 and a generating unit 1202, where:
the calling unit 1201 is configured to call the target plug-in node in response to the posture selection instruction for the non-reference bone postures; according to the posture selection instruction, the target plug-in node obtains the non-reference bone shape models corresponding to the non-reference bone postures from the non-reference bone shape model set, the set including a non-reference bone shape model corresponding to each non-reference bone posture; and
the generating unit 1202 is configured to determine the target bone posture according to the parameter input instruction for the target bone posture parameters of the animated character, and to generate the target bone shape model of the target bone posture based on the obtained non-reference bone shape models.
  • the calling unit 1201 is specifically configured to:
if the posture selection instruction indicates disabling a bone posture, obtain from the non-reference bone shape model set the non-reference bone shape models corresponding to the non-reference bone postures that are not selected;
if the posture selection instruction indicates using a bone posture, obtain from the non-reference bone shape model set the non-reference bone shape models corresponding to the selected non-reference bone postures.
  • the device further includes: a deleting unit 1203;
the deleting unit is configured to delete, in response to a posture deletion instruction, the non-reference bone shape model corresponding to the posture deletion instruction.
  • the generating unit 1202 is specifically configured to:
for each obtained non-reference bone posture, determine the vector distance between the non-reference bone posture and the target bone posture, where a bone posture is a three-dimensional (3D) mathematical vector; transform each vector distance into the radial function space based on the target radial function of the RBF; linearly map each vector distance in the radial function space to determine the weight of the corresponding non-reference bone posture; and weight and sum the corresponding non-reference bone shape models with those weights to generate the target bone shape model of the target bone posture.
  • the generating unit 1202 is further configured to:
the target radial function is selected from the preset radial functions according to the function selection instruction; the function value of the target radial function is proportional to the square of the vector distance and to the logarithm of the vector distance.
  • the generating unit 1202 is further configured to:
  • the shape model of the target bone pose is stored as a non-reference bone shape model in the non-reference bone shape model collection.
For convenience of description, the above parts are divided into units (or modules) by function and described separately. Of course, when implementing this application, the functions of each unit (or module) can be implemented in one or more pieces of software or hardware.
Next, a computing device for producing an animated character according to another exemplary embodiment of the present application is introduced. The computing device for making animation includes a memory and a processor; the memory stores computer-readable instructions that, when executed by the processor, cause the processor to perform any step of the method for making animation of the various exemplary embodiments in this application.
The animation-production computing device 1300 shown in FIG. 13 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
  • the components of the animated character computing device 1300 may include, but are not limited to: the aforementioned at least one processor 1301, the aforementioned at least one memory 1302, and a bus 1303 connecting different system components (including the memory 1302 and the processor 1301).
  • the bus 1303 represents one or more of several types of bus structures, including a memory bus or a memory controller, a peripheral bus, a processor, or a local bus using any bus structure among multiple bus structures.
  • the memory 1302 may include a readable medium in the form of a volatile memory, such as a random access memory (RAM) 13021 and/or a cache memory 13022, and may further include a read-only memory (ROM) 13023.
The memory 1302 may also include a program/utility 13025 having a set of (at least one) program modules 13024, such program modules 13024 including but not limited to an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
The animated character computing device 1300 may also communicate with one or more external devices 1304 (such as keyboards and pointing devices), with one or more devices that enable users to interact with the animated character computing device 1300, and/or with any device (such as a router or modem) that enables the animated character computing device 1300 to communicate with one or more other computing devices. This communication can be performed through an input/output (I/O) interface 1305.
The animated character computing device 1300 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 1306. As shown in FIG. 13, the network adapter 1306 communicates with the other modules of the animated character computing device 1300 through the bus 1303. It should be understood that, although not shown in FIG. 13, other hardware and/or software modules can be used in conjunction with the animated character computing device 1300, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Optionally, a non-volatile computer-readable storage medium is provided, which stores computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of the method for making animation in any of the foregoing embodiments.
In some possible implementations, various aspects of the method for making animated characters provided in this application can also be implemented in the form of a program product including program code; when the program product runs on a computing device, the program code causes the computing device to execute the steps in the method for producing an animated character according to the various exemplary embodiments of this application described above in this specification.
  • the program product can adopt any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
The readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The program product for making animated characters in the embodiments of the present application can adopt a portable compact disc read-only memory (CD-ROM), include program code, and run on a computing device. However, the program product of this application is not limited to this: in this document, a readable storage medium can be any tangible medium that contains or stores a program, where the program can be used by, or in combination with, an instruction execution system, apparatus, or device.
  • the readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
The readable signal medium may also be any readable medium other than a readable storage medium, and that readable medium may send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device.
  • the program code contained on the readable medium can be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the foregoing.
The program code for performing the operations of this application can be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
The program code can be executed entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
Those skilled in the art should understand that the embodiments of this application can be provided as methods, systems, or computer program products. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, where the instruction apparatus implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application provides a method, apparatus, computing device, and storage medium for producing animation, belonging to the field of computer technology, and aims to improve the execution efficiency of animation production. In response to a posture selection instruction for non-reference bone postures, a target plug-in node is called; according to the posture selection instruction, the target plug-in node obtains the non-reference bone shape models corresponding to the non-reference bone postures from a non-reference bone shape model set. According to a parameter input instruction for the target bone posture parameters of an animated character, a target bone posture is determined, and a target bone shape model of the target bone posture is generated based on the obtained non-reference bone shape models.

Description

Method, apparatus, computing device, and storage medium for producing animation
This application claims priority to the Chinese patent application No. 2020100801490, filed with the China Patent Office on February 4, 2020 and entitled "Method, apparatus, and storage medium for producing animation", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to computer technology, and provides a method, apparatus, computing device, and storage medium for producing animation.
Background
Animated characters are widely used in games, film, and television. When producing an animated character for a game or for film and television, the character's shape model is mainly determined from the character's bone posture.
When producing an animated character, a reference bone posture of the character and the corresponding shape model are first pre-built in animation software or a game engine, where the shape model is analogous to human skin and the bone posture is analogous to the human skeleton; FIG. 1 is a schematic diagram of the arm posture of an animated character. The bone positions must correspond to the shape model: for example, when the elbow bends, the upper arm bulges to give the impression of muscle, while the part connecting the upper and lower arm compresses to imitate the squeezing of real muscle, as shown in FIG. 2, a schematic diagram of the arm posture of an animated character when the elbow is bent. For the animated character to look lifelike, its outer appearance must deform correspondingly with the motion of the whole skeleton.
At present, each time an animated character is produced, certain specific bone postures must be prepared in advance, and all of the pre-made specific bone postures are then used to produce the character. However, some specific bone postures degrade the character's displayed appearance. To avoid such negative effects, only some of the specific bone postures are used, and the unused ones are deleted. Under this scheme, a deleted specific bone posture must be remade before it can be used again, making animation production labor-intensive and inefficient.
Summary
According to various embodiments of this application, a method, apparatus, computing device, and storage medium for producing animation are provided.
In a first aspect, this application provides a method for producing animation, performed by a computing device, the method including:
in response to a posture selection instruction for non-reference bone postures, calling a target plug-in node and, according to the posture selection instruction, obtaining from a non-reference bone shape model set the non-reference bone shape models corresponding to the non-reference bone postures, where the set includes a non-reference bone shape model corresponding to each non-reference bone posture; and
according to a parameter input instruction for the target bone posture parameters of an animated character, determining a target bone posture, and generating a target bone shape model of the target bone posture based on the obtained non-reference bone shape models.
In a second aspect, this application provides an apparatus for producing animation, the apparatus including:
a calling unit and a generating unit, where:
the calling unit is configured to call the target plug-in node in response to the posture selection instruction for the non-reference bone postures and, according to the instruction, obtain from the non-reference bone shape model set the non-reference bone shape models corresponding to the postures, the set including a model for each non-reference bone posture; and
the generating unit is configured to determine the target bone posture according to the parameter input instruction for the target bone posture parameters of the animated character, and to generate the target bone shape model of the target bone posture based on the obtained non-reference bone shape models.
In a third aspect, an embodiment of this application provides a computing device for producing animation, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the above method for producing animation.
In a fourth aspect, an embodiment of this application provides one or more non-volatile storage media storing computer-readable instructions that, when executed by one or more processors, cause the processors to perform the steps of the above method for producing animation.
The beneficial effects of this application are as follows:
In the method, apparatus, and storage medium for producing animation of this application, a target plug-in node is called according to a posture selection instruction for non-reference bone postures; the non-reference bone shape models corresponding to the postures determined by the instruction are obtained from the non-reference bone shape model set; and a target bone shape model of the target bone posture is generated based on the obtained models, where the target bone posture is determined according to a parameter input instruction for the target bone posture parameters of the animated character. Selecting, at the plug-in node, which non-reference bone postures are used to produce the bone shape model improves flexibility during character production; unselected non-reference bone postures are not deleted, so they need not be remade, which reduces computation and improves execution efficiency.
Other features and advantages of this application will be set forth in the following description, will become partly apparent from the description, or will be understood by practicing this application. The objectives and other advantages of this application can be realized and obtained through the structures particularly pointed out in the written description, the claims, and the accompanying drawings.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the arm posture of an animated character;
FIG. 2 is a schematic diagram of the arm posture of an animated character when the elbow is bent;
FIG. 3 is a schematic diagram of generating a shape model corresponding to a non-reference bone posture in the related art;
FIG. 4 is a schematic diagram of an application scenario for producing animation according to an embodiment of this application;
FIG. 5 is a flowchart of a method for producing animation according to an embodiment of this application;
FIG. 6 is a display interface for triggering a posture selection instruction according to an embodiment of this application;
FIG. 7 is a schematic diagram of triggering a posture selection instruction according to an embodiment of this application;
FIG. 8 is another display interface for triggering a posture selection instruction according to an embodiment of this application;
FIG. 9 is a display interface for determining a target radial function according to an embodiment of this application;
FIG. 10 is a flowchart of producing animation through plug-in nodes according to an embodiment of this application;
FIG. 11 is a schematic diagram of a target bone shape model generated based on non-reference bone shape models according to an embodiment of this application;
FIG. 12 is a structural diagram of an apparatus for producing animation according to an embodiment of this application; and
FIG. 13 is a structural diagram of a computing device for producing animation according to an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and benefits of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
Some terms used in the embodiments of this application are explained below for ease of understanding by those skilled in the art.
Animation software: a general term for the class of software used to produce animated characters, including Maya, Blender, and Houdini. Maya is a 3D modeling and animation package covering modeling, animation, rendering, and effects; Blender is an open-source, cross-platform, all-in-one 3D animation suite covering modeling, animation, materials, rendering, audio processing, and video editing; Houdini is a 3D computer graphics package.
Animated character: a virtual character animated by a 3D (three-dimensional) game engine or animation software and drawn with 3D modeling and rendering techniques. The virtual character can be a virtual object with a bone posture and an outer appearance, for example a virtual person or a virtual animal.
Skeletal animation: every animated character contains at least two kinds of data, the bone posture and the outer appearance. In game/film animation production, animation made by changing the outer appearance through the bone posture is called skeletal animation.
Character skinning: the character's shape changes with the bone posture, so bones must be defined together with the shape-model vertices that each bone can drive. Skinning is the process of specifying the driving relationship between all bones and all shape-model vertices: when a bone's posture changes, the driven vertices move with it, that is, the character's shape model changes. Taking the bone posture and shape model of FIG. 1 as the initial posture, when the bone posture changes from that of FIG. 1 to that of FIG. 2, the shape model changes at the same time, because the bones drive the vertices of the shape model.
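As a rough illustration of the driving relationship that skinning specifies, the following is a minimal linear-blend-skinning sketch. It is an assumption-laden simplification (plain NumPy, 4×4 bone transforms expressed relative to the rest pose, per-vertex weights summing to 1), not the skinning implementation of Maya or any particular package.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, bone_transforms, skin_weights):
    """Move mesh vertices with the bones that drive them.

    rest_vertices:   (m, 3) vertex positions in the reference posture
    bone_transforms: (b, 4, 4) current transform of each bone, relative
                     to its rest transform
    skin_weights:    (m, b) driving strength of each bone on each vertex
                     (each row sums to 1 -- the "driving relationship")
    """
    m = len(rest_vertices)
    V = np.concatenate([np.asarray(rest_vertices, dtype=float),
                        np.ones((m, 1))], axis=1)  # homogeneous coordinates
    # Transform every vertex by every bone...
    per_bone = np.einsum('bij,mj->mbi', bone_transforms, V)[..., :3]
    # ...then blend the per-bone results with the skin weights.
    return np.einsum('mb,mbi->mi', skin_weights, per_bone)
```

A vertex driven half by a stationary bone and half by a bone translated one unit along y moves half a unit along y: this is the basic deformation on top of which PoseDriver adds its corrective shapes.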
RBF (Radial Basis Function) algorithm: an algorithm that interpolates mathematically among a set of states to obtain a new state.
PoseDriver method: a scheme that, with the help of the RBF algorithm, reads the bone posture of an animated character and obtains a new shape state for the character.
BlendShape: a shape-fusion deformer; a data storage form that records the shape-model data of an animated character.
Modeler: the artist who decides the driving relationship between bone postures and shape models. The modeler establishes the correspondence between the character's bone postures and its shape models so that, when a bone posture changes, the character's shape model changes accordingly through the skinning process.
The design ideas of the embodiments of this application are briefly introduced below.
Producing an animated character with animation software or a game engine is mainly a matter of producing the character's skeletal animation. One approach is the PoseDriver (pose-driven) method, which deforms the surface of the character's shape model to obtain a new shape model. In this embodiment, producing a character in Maya is taken as the example.
Surface deformation is the process by which the shape model changes with the bone posture. Using only the driving relationship specified by skinning, the shape model undergoes a basic change, but the result is unsatisfactory. To achieve a better visual effect, and one that the modeler can customize from experience, PoseDriver is applied on top of skinning to obtain the new shape model.
In the PoseDriver process, the modeler must predefine the shape models of a bone in different postures. Taking the upper-arm bone as an example, five reference bone postures are usually defined: upper arm horizontal, forward, up, down, and backward.
When producing the shape model corresponding to a non-reference bone posture, the modeler uses the five predefined reference bone postures and their reference shape models, working in the animation software. In this process, the skinning procedure is applied to the non-reference bone posture and the reference bone shape models to determine an initial non-reference bone shape model. If the modeler judges that this model does not meet the requirements, the modeler modifies it until it does. The satisfactory non-reference bone shape model is then transformed back, through an inverse coordinate-space conversion that can be called the InvertShape computation, into the coordinate system used before skinning, so that the converted model and the shape models of the reference postures are unified in one coordinate system. FIG. 3 is a schematic diagram of generating the non-reference bone shape model corresponding to a non-reference bone posture. In the animation field, non-reference bone postures are generally called specific bone postures, to distinguish them from reference postures; correspondingly, their shape models are also called specific bone shape models.
After the non-reference bone postures and their shape models are generated, the shape model corresponding to any new bone posture can be determined from them.
Taking the character's upper-arm bone as an example, the shape model corresponding to a new bone posture is determined as follows.
A non-reference bone posture is generally defined by parameters such as action instruction parameters and bone bending angles, collectively called bone posture parameters; the target bone posture to be generated is determined from the input parameters. During production, when input parameters for the character are received, the upper-arm bone first moves, producing a new target bone posture; then, from the non-reference bone shape models corresponding to the non-reference bone postures, the shape model corresponding to the new target posture is generated.
At present, pre-generated non-reference bone postures cannot be disabled while generating the shape model for a new target posture. Disabling a non-reference bone posture requires deleting it and its shape model; to use that posture again, its shape model must be regenerated. Deleting and regenerating non-reference bone shape models is labor-intensive and hurts character production efficiency.
Meanwhile, if a posture that should be disabled is not deleted, all non-reference bone postures and their shape models are used when generating the shape model for a new posture, which degrades the naturalness of the result; and using all of them is computationally expensive, reducing the efficiency of producing the character.
In view of this, embodiments of this application provide a method, apparatus, computing device, and storage medium for producing animation. The embodiments involve artificial intelligence (AI) and machine learning, and are designed based on the computer vision (CV) and machine learning (ML) technologies of artificial intelligence.
Artificial intelligence is the theory, method, technology, and application system that uses digital computers, or machines controlled by them, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, AI is a comprehensive discipline of computer science that attempts to understand the essence of intelligence and to produce intelligent machines that respond in ways similar to human intelligence. AI studies the design principles and implementation methods of intelligent machines, giving machines the capabilities of perception, reasoning, and decision-making. AI technology mainly covers computer vision, speech processing, and machine learning/deep learning.
With the research and progress of AI technology, AI has been studied and applied in many fields, such as smart homes, image retrieval, video surveillance, smart speakers, smart marketing, autonomous and driverless driving, drones, robots, and smart healthcare; as the technology develops, AI will be applied in more fields and deliver increasing value.
Computer vision is an important application of AI; it studies theories and technologies aimed at building AI systems that obtain information from images, videos, or multidimensional data to replace human visual interpretation. Typical computer vision technology includes image processing and video analysis. The method for producing animation provided by the embodiments of this application involves image processing.
Machine learning is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, and algorithmic complexity theory. It studies how computers simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve performance. Machine learning is the core of AI, the fundamental way to make computers intelligent, and its applications span all fields of AI; machine learning and deep learning usually include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning. During animation production, the embodiments of this application adopt a scheme based on the RBF algorithm and the PoseDriver method to produce the skeletal animation of animated characters.
In the animation production method provided by this application, in order to retain unselected non-reference bone shape models, plug-in technology is used to store all non-reference bone shape models outside the animation software, forming a non-reference bone shape model set. Each time animation is produced, the plug-in retrieves from this set the shape models of the selected non-reference bone postures to produce the shape model of the target bone posture, so the shape models of unselected postures need not be deleted. Based on this principle, each time animation is produced, the method responds to a posture selection instruction for the non-reference bone postures by calling a target plug-in node, which obtains the selected non-reference bone shape models from the set, the set containing the shape model corresponding to each non-reference bone posture; then, based on the obtained models, the target bone shape model of the target bone posture is generated, the target posture having been determined according to a parameter input instruction for the target bone posture parameters of the animated character.
In this application, calling the target plug-in node and obtaining the selected non-reference bone shape models from the set allows the models used to produce the character's bone shape model to be chosen according to actual needs, improving flexibility during production; the selected models yield a more natural shape model, and unselected models need not be used to generate the target bone shape model, reducing computation and improving execution efficiency.
Meanwhile, because this application stores the non-reference bone shape model set in a plug-in, at production time at least one model is fetched from the set for use while the unused models remain stored in the set rather than being deleted, so they need not be remade, reducing workload and improving execution efficiency.
In a possible implementation, when generating the target bone shape model from the obtained non-reference bone shape models, at least two non-reference bone postures and their corresponding shape models can be obtained; for each non-reference posture, the vector distance between it and the target posture is determined; based on the target radial function of the RBF, each obtained vector distance is transformed into the radial function space, the target radial function being selected from preset radial functions according to a function selection instruction; each vector distance is linearly mapped in the radial function space to determine the weight of the corresponding non-reference posture; and the corresponding shape models are weighted and summed with these weights to generate the target bone shape model of the target posture. The target radial function is
F(d) = k · d² · ln(d),
where d is the vector distance and k is a constant; using this function as the target radial function produces a more natural-looking target bone shape model.
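Written out, the interpolation that PoseDriver performs with this radial function is the standard RBF scheme. The following formulation is a sketch consistent with the steps above; the symbols $p_i$, $M_i$, $w_i$ are introduced here for illustration and do not appear in the original text.

```latex
% Stored non-reference postures p_1,...,p_n (3D vectors)
% with shape models M_1,...,M_n; kernel F(d) = k d^2 \ln d.
\[
\Phi_{ij} = F\!\left(\lVert p_i - p_j \rVert\right), \qquad
\varphi_i(p_t) = F\!\left(\lVert p_t - p_i \rVert\right)
\]
% "Linear mapping in the radial function space": solve for the weights
\[
\Phi \, w(p_t) = \varphi(p_t)
\quad\Longrightarrow\quad
w(p_t) = \Phi^{-1} \varphi(p_t),
\qquad w_i(p_j) = \delta_{ij}
\]
% Weighted sum of the shape models gives the target bone shape model:
\[
M_t = \sum_{i=1}^{n} w_i(p_t)\, M_i
\]
```

Because $w_i(p_j) = \delta_{ij}$, the blend reproduces each stored shape model exactly at its own posture and interpolates smoothly between them.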
Having introduced the design ideas of the embodiments of this application, the application scenario of this application is briefly described below. It should be noted that the following scenario is only for illustrating the embodiments and is not limiting; in practice, the technical solutions provided by the embodiments can be applied flexibly according to actual needs.
FIG. 4 is a diagram of an application scenario for producing animated characters provided by this application. A server 40 communicates over a network with multiple terminal devices 41 on which animation software or a game engine is installed; the network can be, but is not limited to, a local area network, a metropolitan area network, or a wide area network. The terminal device 41 can be a personal computer (PC), tablet, personal digital assistant (PDA), notebook, mobile phone, or other terminal device, or a computer with a mobile terminal, including various portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile devices that can provide voice, data, or voice-and-data connectivity to users and exchange voice, data, or voice and data with a radio access network. The server 40 can be any background device capable of providing Internet services for managing stored data.
In this application scenario, the terminal device 41 is used to install the animation software or game engine and to display its operation interfaces on the terminal's screen; through the interfaces it receives the various operation instructions triggered by the user and transmits them to the server 40, so that the server 40 responds to the instructions, produces the animated character, and displays it on the screen of the terminal device 41.
In a possible application scenario, to reduce communication latency, servers 40 may be deployed in various regions, or, for load balancing, different servers 40 may each serve the character-production processes of different terminal devices 41. Multiple servers 40 can share data through a blockchain; the multiple servers 40 are equivalent to a data sharing system composed of them. For example, a terminal device 41 at location a connects to one server 40, and a terminal device 41 at location b connects to another server 40.
Each server 40 in the data sharing system has a node identifier corresponding to it, and each server 40 can store the node identifiers of the other servers 40 in the system, so that generated blocks can later be broadcast to the other servers 40 according to those identifiers. Each server 40 can maintain a node identifier list as shown in the table below, storing server names and node identifiers correspondingly. The node identifier can be an IP (Internet Protocol) address or any other information that can identify the node; Table 1 uses IP addresses only as an example.
Table 1
Background server name    Node identifier
Node 1                    119.115.151.174
Node 2                    118.116.189.145
Node 3                    119.124.789.258
基于图4论述的应用场景,下面对本申请实施例提供的制作动画角色的方法进行介绍。
当接收到针对目标骨骼输入的目标骨骼姿态参数后,确定动画角色的骨骼姿态发生变化,需要制作动画角色的目标骨骼姿态对应的目标骨骼外形模型。
由于在制作动画角色的骨骼外形模型时,是根据动画角色的骨骼姿态,来改变动画角色的骨骼外形模型,从而使动画角色产生表面变形。针对一个骨骼,预先建立该骨骼在不同姿态下的骨骼外形模型,当该骨骼发生运动,产生新骨骼姿 态时,将会根据该骨骼预先建立的骨骼姿态和骨骼外形模型之间的对应关系,生成新骨骼姿态对应的骨骼外形模型。然而,使用所有的预先建立的骨骼姿态和骨骼外形模型,计算量大,执行效率低,且预先建立的骨骼姿态和骨骼外形模型并不是全部满足制作动画角色的条件,导致产生的动画角色的骨骼姿态的外形模型不自然,影响视觉效果;若禁用掉某个骨骼姿态和骨骼外形模型,则需要删除该骨骼姿态和对应的骨骼外形模型,后续再使用时,需要重新建立,浪费动画制作时间,进一步降低了执行效率。
因此,本申请实施例中在制作动画角色的骨骼外形模型时,针对预先建立的每个骨骼姿态和对应的骨骼外形模型设置禁用或允许使用的功能,然后可以根据实际需要选择在制作目标骨骼姿态的目标骨骼外形模型时使用的非基准骨骼姿态,及对应的非基准骨骼外形模型,并根据非基准骨骼姿态及对应的非基准骨骼外形模型,生成动画角色的目标骨骼姿态的目标骨骼外形模型。
FIG. 5 is a flowchart of a method for producing animation according to an embodiment of this application, including the following steps.
Step 500: Receive target bone posture parameters input for a target bone of the animated character, and determine the target bone posture from the target bone posture parameters.
The target bone posture parameters are target bone position information. Taking the character's arm as an example, the parameter can be an arm bend of 30°, so that the upper and lower arm form a 30° angle; that state is the target bone posture. The parameters can also be input action information, such as jumping, from which each bone posture of the character in the jumping state can be determined; for example, those postures can be preset.
In this application, after the character's target bone posture is determined, the target bone shape model in the target bone posture must be determined.
Because the target bone shape model is generated by fusing preset non-reference bone shape models, multiple non-reference bone shape models must be determined: models that can generate the target bone shape model, give it natural curves, and satisfy the visual requirements.
In this process, the shape models corresponding to at least one non-reference bone posture are selected from the non-reference bone shape model set pre-stored at the plug-in node, and the target bone shape model corresponding to the target posture is produced from the selected models, the non-reference postures being determined according to a posture selection instruction.
It should be noted that this process may require several iterations until the generated target bone shape model has natural curves and meets the visual-effect conditions; the non-reference postures used when the conditions are met are then packaged and stored for use during the game.
Obtaining non-reference bone shape models from the set stored at the plug-in node is performed after a posture selection instruction for the non-reference bone postures is received.
Step 501: In response to a posture selection instruction for non-reference bone postures, call the target plug-in node and, according to the instruction, obtain from the non-reference bone shape model set the non-reference bone shape models corresponding to the postures.
In this application, for each bone, multiple reference bone postures and their corresponding reference shape models are pre-established and stored. Then, from the pre-established reference postures and reference shape models, the shape models corresponding to the non-reference postures are generated and stored in the plug-in node's non-reference bone shape model set, for use when producing target bone shape models.
It should be noted that when a non-reference bone shape model is stored, the corresponding non-reference bone posture must be stored with it, and each bone has multiple pre-stored postures and corresponding models, so that the models corresponding to the postures contained in a posture selection instruction can be determined from the instruction.
In this application, the posture selection instruction is triggered manually on a display interface. FIG. 6 shows a display interface for triggering a posture selection instruction according to an embodiment of this application: multiple non-reference bone posture parameters are displayed, each provided with a disable or allow-use function, and each corresponding to one non-reference bone posture.
When triggering the posture selection instruction, the user can choose to disable a non-reference bone posture or to use one. Taking using a posture as an example, the allow-use function of the desired posture is checked, as shown in FIG. 7, a schematic diagram of triggering a posture selection instruction according to an embodiment of this application.
In this application, if the disable function is checked for a non-reference bone posture, the posture selection instruction indicates disabling that bone posture; when the target plug-in node is called and obtains, according to the instruction, the non-reference bone shape models from the set, this means obtaining from the set the shape models corresponding to the postures that are not indicated as disabled. If the allow-use function is checked for a posture, the instruction indicates using that bone posture; the target plug-in node then obtains from the set the shape models corresponding to the postures indicated for use.
In a possible embodiment, a delete function is also provided for the non-reference bone postures shown on the display page. When a posture deletion instruction is received, the posture corresponding to the instruction is deleted in response, together with its shape model stored in the non-reference bone shape model set.
The delete function can be a delete button set for each posture, as shown in FIG. 6; a delete area set for each posture; or a single delete area valid for all postures, in which case a posture is deleted by dragging it into the delete area.
In a possible implementation, the non-reference bone shape models can be displayed on the display interface with disable and/or allow-use and delete functions, the same as in FIG. 6, which is not repeated here.
In another possible implementation, only the non-reference bone postures and/or non-reference bone shape models are shown on the display page, without the disable, use, and delete functions; clicking a posture and/or model jumps to a function interface containing those functions, as shown in FIG. 8, another display interface for triggering a posture selection instruction according to an embodiment of this application.
Because this application represents the disable/allow-use function and the delete function separately, disabling a bone posture only means that its shape model is not used when producing the animated bone shape model; the shape model of the disabled posture is not deleted, so it can still be used the next time a target bone shape model of a target posture is produced, without regeneration, which reduces the operation flow and improves execution efficiency.
After the non-reference bone shape models of the non-reference bone postures are obtained, the target bone shape model of the target bone posture is generated based on them, the target posture being the one determined in step 500 according to the parameter input instruction for the target bone posture parameters of the animated character.
Step 502: Generate the target bone shape model of the target bone posture based on the obtained non-reference bone shape models of the non-reference bone postures.
In this application, the non-reference bone postures and their corresponding shape models have been determined according to the posture selection instruction, and the target bone posture has been determined according to the parameter input instruction for the target bone posture parameters of the character;
thus, when generating the target bone shape model of the target posture, the target bone shape model is generated using the non-reference bone postures, the non-reference bone shape models, and the target bone posture.
In a possible implementation, the target bone shape model is generated from the above three parameters based on the RBF algorithm.
When the obtained non-reference bone postures include at least two, the shape models corresponding to the at least two postures are determined;
then, when generating the target bone shape model of the target posture based on the obtained models:
for each obtained non-reference bone posture, determine the vector distance between the non-reference bone posture and the target bone posture, where a bone posture is a three-dimensional (3D) mathematical vector;
based on the target radial function of the radial basis function (RBF), transform each obtained vector distance into the radial function space;
linearly map each vector distance in the radial function space to determine the weight of the non-reference bone posture corresponding to each vector distance; and
weight and sum the corresponding non-reference bone shape models according to the obtained weights to generate the target bone shape model of the target bone posture.
The function value of the target radial function is proportional to the square of the vector distance and to the logarithm of the vector distance, as in the following formula:
F(d) = k · d² · ln(d),
where d is the vector distance and k is a constant.
In a possible implementation, the target radial function can also be determined by a function selection instruction; in that case the display interface must include at least two radial functions. FIG. 9 shows a display interface for determining the target radial function, which includes at least the following selectable radial functions:
linear function: F(d) = k·d;
Gaussian function: F(d) = e^(−(k·d)²);
special radial function, also called the thin-plate function: F(d) = k·d²·ln(d);
where d is the vector distance and k is a constant.
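The three selectable radial functions can be written directly in code. This sketch only restates the formulas above, with the convention F(0) = 0 for the thin-plate function (ln d is undefined at d = 0); the exact parameterization of the Gaussian is an assumption, since the original formula images are not reproduced in this text.

```python
import math

def linear(d, k=1.0):
    """Linear function: F(d) = k * d."""
    return k * d

def gaussian(d, k=1.0):
    """Gaussian function: F(d) = exp(-(k * d)**2)  (assumed form)."""
    return math.exp(-(k * d) ** 2)

def thin_plate(d, k=1.0):
    """Thin-plate ("special") radial function: F(d) = k * d**2 * ln(d)."""
    return 0.0 if d == 0 else k * d * d * math.log(d)
```

Per the embodiment above, the thin-plate function is the one said to yield the most natural-looking target bone shape models.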
In a possible implementation, the method for producing animation can be implemented through multiple plug-in nodes, with one plug-in node implementing multiple functional steps. FIG. 10 is a flowchart of producing animation through plug-in nodes according to an embodiment of this application; as can be seen from FIG. 10, four plug-in nodes can be used to implement the PoseDriver process of the production flow.
First plug-in node: when an animated character is to be produced, the parameter input for the character's target bone posture parameters is determined and the first plug-in node is called; it determines the target bone posture corresponding to the parameters and inputs the target bone posture into the second plug-in node.
Second plug-in node: it determines the non-reference bone postures used when generating the target bone shape model. In response to the posture selection instruction for the non-reference bone postures, the second plug-in node is called; according to the instruction, it obtains from the non-reference bone shape model set the shape models corresponding to the selected postures and inputs the determined postures, their shape models, and the target bone posture into the third plug-in node.
It should be noted that the second plug-in node is the target plug-in node provided by this embodiment of the application.
In a possible implementation, the second plug-in node can also perform the following function: according to the target bone posture and the non-reference bone postures, determine the vector distance between each non-reference bone posture and the target bone posture. Since a bone posture is a standard 3D mathematical vector, the vector distance formula can be used to determine the distances, which are then input, together with the non-reference bone shape models, into the third plug-in node.
Third plug-in node: it determines the target radial function used when generating the target bone shape model from the obtained non-reference bone shape models. In response to a function selection instruction, the third plug-in node is called to determine the corresponding target radial function. Based on the non-reference bone postures and the target bone posture determined by the second plug-in node, it determines the vector distances; it then maps the distances into the radial function space based on the target radial function, performs a linear mapping in that space to determine the weight of the non-reference bone posture corresponding to each distance, and inputs the determined weights and the non-reference bone shape models into the fourth plug-in node.
In a possible implementation, if the second plug-in node inputs the vector distances and the non-reference bone shape models, the distances are mapped directly into the radial function space based on the target radial function, then linearly mapped there to determine the weights, and the weights and shape models are input into the fourth plug-in node.
Fourth plug-in node: it receives the weights and performs shape fusion according to the weight of each non-reference bone posture and the corresponding non-reference bone shape model to obtain the target bone shape model.
FIG. 11 is a schematic diagram of a target bone shape model generated, with the method of this embodiment, based on the obtained non-reference bone shape models of the non-reference bone postures.
With the method for producing animation provided by this embodiment, the non-reference bone postures used to produce the bone shape model are selected at the plug-in node, improving flexibility during character production; unselected non-reference bone postures are not deleted, so they need not be remade, which reduces the amount of calculation and improves execution efficiency.
It should be understood that although the steps in the flowcharts of the above embodiments are displayed sequentially in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they can be executed in other orders. Moreover, at least some of the steps can include multiple sub-steps or stages that are not necessarily completed at the same moment but can be executed at different moments; their order of execution is not necessarily sequential, and they can be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of this application also provides an apparatus 1200 for producing animated characters, as shown in FIG. 12. The apparatus 1200 includes a calling unit 1201 and a generating unit 1202, where:
the calling unit 1201 is configured to call the target plug-in node in response to the posture selection instruction for the non-reference bone postures; according to the instruction, the target plug-in node obtains from the non-reference bone shape model set the non-reference bone shape models corresponding to the postures, the set including a model corresponding to each non-reference bone posture; and
the generating unit 1202 is configured to determine the target bone posture according to the parameter input instruction for the character's target bone posture parameters, and to generate the target bone shape model of the target posture based on the obtained non-reference bone shape models.
In a possible implementation, the calling unit 1201 is specifically configured to:
if the posture selection instruction indicates disabling a bone posture, obtain from the set the shape models corresponding to the unselected non-reference bone postures;
if the posture selection instruction indicates using a bone posture, obtain from the set the shape models corresponding to the selected non-reference bone postures.
In a possible implementation, the apparatus further includes a deleting unit 1203;
the deleting unit is configured to delete, in response to a posture deletion instruction, the non-reference bone shape model corresponding to the instruction.
In a possible implementation, the generating unit 1202 is specifically configured to:
for each obtained non-reference bone posture, determine the vector distance between the posture and the target bone posture, where a bone posture is a three-dimensional (3D) mathematical vector;
transform each obtained vector distance into the radial function space based on the target radial function of the radial basis function (RBF);
linearly map each vector distance in the radial function space to determine the weight of the corresponding non-reference bone posture; and
weight and sum the corresponding non-reference bone shape models according to the obtained weights to generate the target bone shape model of the target bone posture.
In a possible implementation, the generating unit 1202 is further configured to:
select the target radial function from preset radial functions according to a function selection instruction, the function value of the target radial function being proportional to the square of the vector distance and to the logarithm of the vector distance.
In a possible implementation, the generating unit 1202 is further configured to:
store the shape model of the target bone posture into the non-reference bone shape model set as a non-reference bone shape model.
For convenience of description, the above parts are divided by function into units (or modules) and described separately. Of course, when implementing this application, the functions of the units (or modules) can be implemented in one or more pieces of software or hardware.
Having introduced the method and apparatus for producing animated characters of the exemplary embodiments of this application, a computing device for producing animated characters according to another exemplary embodiment is introduced next.
A person skilled in the art can understand that the aspects of this application can be implemented as a system, method, or program product. Accordingly, the aspects of this application can take the form of an entirely hardware implementation, an entirely software implementation (including firmware, microcode, etc.), or an implementation combining hardware and software, which may be collectively referred to here as a "circuit", "module", or "system".
In a possible implementation, the computing device for producing animation provided by the embodiments of this application includes a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform any step of the methods for producing animation of the various exemplary embodiments of this application.
A computing device 1300 for producing animated characters according to this embodiment of this application is described below with reference to FIG. 13. The computing device 1300 of FIG. 13 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of this application.
As shown in FIG. 13, the components of the computing device 1300 can include, but are not limited to, the aforementioned at least one processor 1301, the aforementioned at least one memory 1302, and a bus 1303 connecting the different system components (including the memory 1302 and the processor 1301).
The bus 1303 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus structures.
The memory 1302 can include readable media in the form of volatile memory, such as random access memory (RAM) 13021 and/or cache memory 13022, and can further include read-only memory (ROM) 13023.
The memory 1302 can also include a program/utility 13025 having a set of (at least one) program modules 13024; such program modules 13024 include but are not limited to an operating system, one or more application programs, other program modules, and program data, and each of these examples, or some combination thereof, may include an implementation of a network environment.
The computing device 1300 can also communicate with one or more external devices 1304 (such as a keyboard or pointing device), with one or more devices that enable a user to interact with the computing device 1300, and/or with any device (such as a router or modem) that enables the computing device 1300 to communicate with one or more other computing devices. Such communication can occur through an input/output (I/O) interface 1305. The computing device 1300 can also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 1306. As shown in FIG. 13, the network adapter 1306 communicates with the other modules of the computing device 1300 through the bus 1303. It should be understood that, although not shown in FIG. 13, other hardware and/or software modules can be used with the computing device 1300, including but not limited to microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Optionally, a non-volatile computer-readable storage medium is provided, storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the animation making method in any of the foregoing embodiments. In some possible implementations, the aspects of the method for making an animated character provided in this application may also be implemented in the form of a program product, which includes program code; when the program product runs on a computing device, the program code causes the computing device to perform the steps of the method for making an animated character according to the various exemplary implementations of this application described above in this specification.
The program product may use any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more conductors, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof.
The program product for making an animated character according to the implementations of this application may use a portable compact disc read-only memory (CD-ROM), include program code, and run on a computing apparatus. However, the program product of this application is not limited thereto. In this document, a readable storage medium may be any tangible medium containing or storing a program, and the program may be used by, or used in combination with, an instruction execution system, apparatus, or device.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier, carrying readable program code. Such a propagated data signal may take a plurality of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. The readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate, or transmit a program to be used by, or used in combination with, an instruction execution system, apparatus, or device.
The program code contained in the readable medium may be transmitted over any appropriate medium, including but not limited to wireless, wired, optical cable, RF, or the like, or any suitable combination thereof.
The program code for performing the operations of this application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on a user computing apparatus, partly on a user device, as a stand-alone software package, partly on a user computing apparatus and partly on a remote computing apparatus, or entirely on a remote computing apparatus or server. Where a remote computing apparatus is involved, the remote computing apparatus may be connected to the user computing apparatus through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing apparatus (for example, through the Internet by using an Internet service provider).
It should be noted that, although several units or subunits of the apparatus are mentioned in the foregoing detailed description, such division is merely exemplary and not mandatory. In fact, according to the implementations of this application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided and embodied by a plurality of units.
In addition, although the operations of the method of this application are described in a particular order in the accompanying drawings, this does not require or imply that the operations must be performed in that particular order, or that all of the illustrated operations must be performed to achieve the desired results. Additionally or alternatively, some steps may be omitted, a plurality of steps may be combined into one step for execution, and/or one step may be decomposed into a plurality of steps for execution.
A person skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory, and the like) containing computer-usable program code.
This application is described with reference to the flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of this application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of this application have been described, a person skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of this application.
Obviously, a person skilled in the art can make various alterations and variations to this application without departing from the spirit and scope of this application. Thus, if these modifications and variations of this application fall within the scope of the claims of this application and their equivalent technologies, this application is also intended to include these alterations and variations.

Claims (14)

  1. A method for making animation, performed by a computing device, the method comprising:
    in response to a pose selection instruction for non-reference bone postures, invoking a target plug-in node, the target plug-in node obtaining, according to the pose selection instruction, non-reference bone shape models corresponding to the non-reference bone postures from a non-reference bone shape model set; and
    determining a target bone posture according to a parameter input instruction for target bone posture parameters of an animated character, and generating a target bone shape model of the target bone posture based on the obtained non-reference bone shape models of the non-reference bone postures.
  2. The method according to claim 1, wherein the invoking a target plug-in node in response to a pose selection instruction for non-reference bone postures, the target plug-in node obtaining, according to the pose selection instruction, non-reference bone shape models corresponding to the non-reference bone postures from a non-reference bone shape model set, specifically comprises:
    if the pose selection instruction indicates disabling bone postures, obtaining, from the non-reference bone shape model set, the non-reference bone shape models corresponding to the non-reference bone postures that are not selected; and
    if the pose selection instruction indicates using bone postures, obtaining, from the non-reference bone shape model set, the non-reference bone shape models corresponding to the selected non-reference bone postures.
  3. The method according to claim 1, further comprising:
    deleting, in response to a pose deletion instruction, the non-reference bone shape model corresponding to the pose deletion instruction.
  4. The method according to claim 1, wherein after the generating a target bone shape model of the target bone posture, the method further comprises:
    storing the target bone shape model of the target bone posture into the non-reference bone shape model set as a non-reference bone shape model.
  5. The method according to any one of claims 1 to 4, wherein at least two non-reference bone postures are obtained, and the generating a target bone shape model of the target bone posture based on the obtained non-reference bone shape models of the non-reference bone postures specifically comprises:
    for any obtained non-reference bone posture, determining a vector distance between the non-reference bone posture and the target bone posture, a bone posture being a three-dimensional (3D) mathematical vector;
    transforming, based on a target radial function of a radial basis function (RBF), the vector distance corresponding to each obtained non-reference bone posture into radial function space;
    linearly mapping each vector distance in the radial function space to determine a weight of the non-reference bone posture corresponding to each vector distance; and
    performing a weighted summation of the corresponding non-reference bone shape models according to the obtained weights of the non-reference bone postures, to generate the target bone shape model of the target bone posture.
  6. The method according to claim 5, further comprising:
    selecting, in response to a function selection instruction, the target radial function from preset radial functions, a function value of the target radial function being proportional to the square of the vector distance and to the logarithm of the vector distance.
  7. An apparatus for making animation, the apparatus comprising an invoking unit and a generation unit, wherein:
    the invoking unit is configured to invoke a target plug-in node in response to a pose selection instruction for non-reference bone postures, the target plug-in node obtaining, according to the pose selection instruction, non-reference bone shape models corresponding to the non-reference bone postures from a non-reference bone shape model set; and
    the generation unit is configured to determine a target bone posture according to a parameter input instruction for target bone posture parameters of an animated character, and to generate a target bone shape model of the target bone posture based on the obtained non-reference bone shape models of the non-reference bone postures.
  8. The apparatus according to claim 7, wherein the invoking unit is specifically configured to:
    if the pose selection instruction indicates disabling bone postures, obtain, from the non-reference bone shape model set, the non-reference bone shape models corresponding to the non-reference bone postures that are not selected; and
    if the pose selection instruction indicates using bone postures, obtain, from the non-reference bone shape model set, the non-reference bone shape models corresponding to the selected non-reference bone postures.
  9. The apparatus according to claim 7, further comprising a deletion unit;
    the deletion unit being configured to delete, in response to a pose deletion instruction, the non-reference bone shape model corresponding to the pose deletion instruction.
  10. The apparatus according to claim 7, wherein the generation unit is further configured to:
    store the target bone shape model of the target bone posture into the non-reference bone shape model set as a non-reference bone shape model.
  11. The apparatus according to any one of claims 7 to 9, wherein the generation unit is specifically configured to:
    for any obtained non-reference bone posture, determine a vector distance between the non-reference bone posture and the target bone posture, a bone posture being a three-dimensional (3D) mathematical vector;
    transform, based on a target radial function of a radial basis function (RBF), the vector distance corresponding to each obtained non-reference bone posture into radial function space;
    linearly map each vector distance in the radial function space to determine a weight of the non-reference bone posture corresponding to each vector distance; and
    perform a weighted summation of the corresponding non-reference bone shape models according to the obtained weights of the non-reference bone postures, to generate the target bone shape model of the target bone posture.
  12. The apparatus according to claim 11, wherein the generation unit is further configured to:
    select the target radial function from preset radial functions according to a function selection instruction, a function value of the target radial function being proportional to the square of the vector distance and to the logarithm of the vector distance.
  13. A computing device, comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the method according to any one of claims 1 to 6.
  14. One or more non-volatile storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the method according to any one of claims 1 to 6.
PCT/CN2020/125924 2020-02-04 2020-11-02 Animation making method and apparatus, computing device, and storage medium WO2021155686A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2022519774A JP7394977B2 (ja) 2020-02-04 2020-11-02 Method, apparatus, computing device, and storage medium for creating animation
KR1020227004104A KR102637513B1 (ko) 2020-02-04 2020-11-02 Animation making method and apparatus, computing device and storage medium
US17/680,921 US11823315B2 (en) 2020-02-04 2022-02-25 Animation making method and apparatus, computing device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010080149.0A CN111260764B (zh) 2020-02-04 2020-02-04 Method, apparatus, and storage medium for making animation
CN202010080149.0 2020-02-04

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/680,921 Continuation US11823315B2 (en) 2020-02-04 2022-02-25 Animation making method and apparatus, computing device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021155686A1 true WO2021155686A1 (zh) 2021-08-12

Family

ID=70949227

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/125924 WO2021155686A1 (zh) 2020-02-04 2020-11-02 Animation making method and apparatus, computing device, and storage medium

Country Status (5)

Country Link
US (1) US11823315B2 (zh)
JP (1) JP7394977B2 (zh)
KR (1) KR102637513B1 (zh)
CN (1) CN111260764B (zh)
WO (1) WO2021155686A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260764B (zh) 2020-02-04 2021-06-25 Tencent Technology (Shenzhen) Co., Ltd. Method, apparatus, and storage medium for making animation
CN111968204B (zh) * 2020-07-28 2024-03-22 Perfect World (Beijing) Software Technology Development Co., Ltd. Motion display method and apparatus for a skeletal model
CN111951360B (zh) * 2020-08-14 2023-06-23 Tencent Technology (Shenzhen) Co., Ltd. Animation model processing method and apparatus, electronic device, and readable storage medium
CN112076473B (zh) * 2020-09-11 2022-07-01 Tencent Technology (Shenzhen) Co., Ltd. Virtual prop control method and apparatus, electronic device, and storage medium
CN112435314B (zh) * 2020-11-30 2023-02-24 Shanghai Mihoyo Tianming Technology Co., Ltd. Method and apparatus for preventing model clipping in games, electronic device, and storage medium
CN112562043B (zh) * 2020-12-08 2023-08-08 Beijing Baidu Netcom Science and Technology Co., Ltd. Image processing method and apparatus, and electronic device
CN112560962B (zh) * 2020-12-17 2024-03-22 Migu Culture Technology Co., Ltd. Posture matching method and apparatus for skeletal animation, electronic device, and storage medium
CN112634417B (zh) * 2020-12-25 2023-01-10 Shanghai Mihoyo Tianming Technology Co., Ltd. Character animation generation method, apparatus, device, and storage medium
CN113610992B (zh) * 2021-08-04 2022-05-20 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for determining bone driving coefficients, electronic device, and readable storage medium
CN115239890B (zh) * 2022-09-19 2023-03-14 Tencent Technology (Shenzhen) Co., Ltd. Bone construction method and apparatus, storage medium, and electronic device
WO2024127258A1 (en) * 2022-12-14 2024-06-20 Soul Machines Limited Continuous expressive behaviour in embodied agents
CN115908664B (zh) * 2023-01-09 2023-08-15 Shenzhen Zesen Software Technology Co., Ltd. Animation generation method and apparatus for human-computer interaction, computer device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170316595A1 (en) * 2016-03-29 2017-11-02 Korea Advanced Institute Of Science And Technology Method for providing rigging tool and apparatus for providing rigging tool
CN108014497A (zh) * 2017-12-06 2018-05-11 Beijing Pixel Software Technology Co., Ltd. Object adding method and apparatus, and electronic device
CN108597015A (zh) * 2018-01-08 2018-09-28 Jiangsu Chenrui Network Technology Co., Ltd. System, method, and device for automatically binding a skeleton to a three-dimensional biological model, and computer program product
CN109621419A (zh) * 2018-12-12 2019-04-16 NetEase (Hangzhou) Network Co., Ltd. Method and apparatus for generating game character expressions, and storage medium
CN110689604A (zh) * 2019-05-10 2020-01-14 Tencent Technology (Shenzhen) Co., Ltd. Personalized face model display method, apparatus, device, and storage medium
CN111260764A (zh) * 2020-02-04 2020-06-09 Tencent Technology (Shenzhen) Co., Ltd. Method, apparatus, and storage medium for making animation

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5004183B2 (ja) 2008-01-21 2012-08-22 Sammy Corporation Image creation device, image creation program, and image creation support program
JP5527689B2 (ja) 2009-12-28 2014-06-18 National Institute of Information and Communications Technology Method for analyzing the anatomical structure of an object, method for displaying the anatomical structure of an object, and apparatus for displaying the anatomical structure of an object
EP3751517A1 (en) * 2011-05-16 2020-12-16 Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. Fast articulated motion tracking
US9786083B2 (en) * 2011-10-07 2017-10-10 Dreamworks Animation L.L.C. Multipoint offset sampling deformation
US10319133B1 (en) * 2011-11-13 2019-06-11 Pixar Posing animation hierarchies with dynamic posing roots
KR101504103B1 (ko) * 2013-01-16 2015-03-19 Keimyung University Industry-Academic Cooperation Foundation Method for synthesizing and controlling 3D character navigation motion in virtual space using a depth-map sensor
KR101707203B1 (ko) * 2015-09-04 2017-02-15 CG Pixel Studio Co., Ltd. Method for converting computer graphics character animation files based on application of joint rotation values
CN108475439B (zh) * 2016-02-16 2022-06-17 Rakuten Group, Inc. Three-dimensional model generation system, three-dimensional model generation method, and recording medium
KR101895331B1 (ko) * 2017-03-17 2018-09-05 Kwangwoon University Industry-Academic Collaboration Foundation Biped control apparatus and method
US20180286071A1 (en) * 2017-03-30 2018-10-04 Body Surface Translations Inc. Determining anthropometric measurements of a non-stationary subject
KR101998059B1 (ko) * 2017-08-31 2019-07-09 Korea Advanced Institute of Science and Technology Motion retargeting method and apparatus for character animation
EP3759693A4 (en) 2018-02-27 2021-11-24 Magic Leap, Inc. MESH PAIRING FOR VIRTUAL AVATARS
CN108196686B (zh) * 2018-03-13 2024-01-26 Beijing Wuyuanfujie Technology Co., Ltd. Hand motion posture capture device and method, and virtual reality interaction system
US20190370537A1 (en) * 2018-05-29 2019-12-05 Umbo Cv Inc. Keypoint detection to highlight subjects of interest
US10789754B2 (en) * 2018-07-27 2020-09-29 Adobe Inc. Generating target-character-animation sequences based on style-aware puppets patterned after source-character-animation sequences
US10984609B2 (en) * 2018-11-21 2021-04-20 Electronics And Telecommunications Research Institute Apparatus and method for generating 3D avatar
CN110310350B (zh) * 2019-06-24 2021-06-11 Tsinghua University Animation-based motion prediction generation method and apparatus
CN110675474B (zh) * 2019-08-16 2023-05-02 Migu Animation Co., Ltd. Learning method for virtual character models, electronic device, and readable storage medium
CN110570500B (zh) * 2019-09-12 2023-11-21 NetEase (Hangzhou) Network Co., Ltd. Character rendering method, apparatus, device, and computer-readable storage medium


Also Published As

Publication number Publication date
KR102637513B1 (ko) 2024-02-15
US11823315B2 (en) 2023-11-21
CN111260764B (zh) 2021-06-25
JP2022550167A (ja) 2022-11-30
US20220180586A1 (en) 2022-06-09
JP7394977B2 (ja) 2023-12-08
CN111260764A (zh) 2020-06-09
KR20220028127A (ko) 2022-03-08

Similar Documents

Publication Publication Date Title
WO2021155686A1 (zh) Animation making method and apparatus, computing device, and storage medium
CN109377544B (zh) Face three-dimensional image generation method, apparatus, and readable medium
KR102698917B1 (ko) Image processing method and apparatus, electronic device, and storage medium
CN115049799B (zh) Method and apparatus for generating 3D models and avatars
CN114372356B (zh) Digital-twin-based artificial augmentation method, apparatus, and medium
CN112819971A (zh) Avatar generation method, apparatus, device, and medium
US20230342942A1 (en) Image data processing method, method and apparatus for constructing digital virtual human, device, storage medium, and computer program product
CN116468831B (zh) Model processing method, apparatus, device, and storage medium
WO2023284634A1 (zh) Data processing method and related device
WO2024198747A1 (zh) Method, apparatus, device, and storage medium for processing motion capture data
CN117218300B (zh) Three-dimensional model construction method, and training method and apparatus for three-dimensional construction models
CN113822965A (zh) Image rendering processing method, apparatus, and device, and computer storage medium
CN115861498A (zh) Retargeting method and apparatus for motion capture
CN116342782A (zh) Method and apparatus for generating avatar rendering models
WO2024183454A1 (zh) Virtual object animation generation method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN118059501A (zh) Data processing method and apparatus, computer-readable storage medium, and electronic device
CN112206519B (zh) Method, apparatus, storage medium, and computer device for implementing environment changes in game scenes
CN115908664B (zh) Animation generation method and apparatus for human-computer interaction, computer device, and storage medium
CN116977766A (zh) Training method, apparatus, device, medium, and program product for motion completion models
CN115775300A (zh) Human body model reconstruction method, and training method and apparatus for human body reconstruction models
CN112435316B (zh) Method and apparatus for preventing model clipping in games, electronic device, and storage medium
CN114422862A (zh) Service video generation method, apparatus, device, storage medium, and program product
CN117557699B (zh) Animation data generation method, apparatus, computer device, and storage medium
JP6721926B1 (ja) Content search interface, interface display method, and social network service using the same
Li et al. Application of Graph Neural Network and Virtual Reality Based on the Concept of Sustainable Design

Legal Events

121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 20918024; Country of ref document: EP; Kind code of ref document: A1
ENP  Entry into the national phase
     Ref document number: 20227004104; Country of ref document: KR; Kind code of ref document: A
ENP  Entry into the national phase
     Ref document number: 2022519774; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase
     Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established
     Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.12.2022)
122  Ep: pct application non-entry in european phase
     Ref document number: 20918024; Country of ref document: EP; Kind code of ref document: A1