WO2018024089A1 - Method and apparatus for making animation - Google Patents
Method and apparatus for making animation
- Publication number
- WO2018024089A1 (application PCT/CN2017/092940)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- template
- animation
- structure template
- animated object
- animated
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
Definitions
- the invention belongs to the field of animation technology, and in particular relates to a method and device for making animations.
- FIG. 1 shows the flow of making a frame-by-frame animation.
- S11: Professional animators draw each frame of picture required by the animation.
- S12: A camera generates a corresponding image for each picture that was drawn.
- S13: The images are connected in series to generate the animation. To make animations this way, a professional animator must draw every single frame; the workload is huge, the work is repetitive, and production is tedious and time-consuming.
- FIG. 2 shows the flow of making a keyframe animation.
- S21: Professional animators create the key frame pictures required by the animation.
- S22 Using a computer to generate a transition frame image between key frame pictures.
- S23 The key frame image and the transition frame image are connected in series to generate an animation.
- Animators need a deep understanding of the motion patterns between the various key frame pictures before they can use the computer to generate the transition frame pictures between them. This method demands strong professional skills and is not suitable for ordinary users.
- embodiments of the present invention provide a method and apparatus for making an animation.
- an embodiment of the present invention provides a method for making an animation, the method comprising:
- the structure template is actuated to drive the animated object bound to the structure template to perform the corresponding action.
- an embodiment of the present invention provides an apparatus for making an animation, the apparatus comprising:
- An animation object acquisition unit, configured to acquire an animation object;
- An animation object binding unit, configured to bind the animation object to a corresponding structure template;
- An animation making unit, configured to actuate the structure template to drive the animated object bound to it to perform a corresponding action.
- an embodiment of the present invention provides an apparatus for making an animation, the apparatus comprising:
- Memory for storing material data and programs
- processor for executing the program stored by the memory, the program causing the processor to perform the following operations:
- a display for displaying the animated object performing the corresponding action is provided.
- with the embodiments of the invention, the user does not need to master professional animation principles and can easily create an animation he likes, with professional effects, through simple operations; the method is simple and convenient, vivid and interesting, and has a wide range of applications.
- FIG. 1 is a schematic diagram of a first implementation flow of a method for making an animation in the prior art
- FIG. 2 is a schematic diagram of a second implementation flow of a method for making an animation in the prior art
- FIG. 3 is a flow chart showing a method of making an animation according to an embodiment of the invention.
- FIG. 4 is a schematic flow chart of a first embodiment of acquiring an animated object in FIG. 3;
- FIG. 5 is a schematic flow chart of a second embodiment of acquiring an animated object in FIG. 3;
- FIG. 6 is a schematic flow chart of a third embodiment of acquiring an animated object in FIG. 3;
- FIG. 7 is a schematic flow chart of the first embodiment of binding an animated object to a corresponding structural template in FIG. 3;
- FIG. 8 is a schematic flow chart of a second embodiment of binding an animated object to a corresponding structural template in FIG. 3;
- FIG. 9 is a schematic diagram showing the functional structure of an apparatus for making an animation according to an embodiment of the present invention.
- FIG. 10 is a schematic diagram showing the functional structure of a first embodiment of an animation object acquiring unit according to the present invention.
- FIG. 11 is a schematic diagram showing the functional structure of a second embodiment of an animation object acquiring unit according to the present invention.
- FIG. 12 is a schematic diagram showing the functional structure of a third embodiment of an animation object acquiring unit according to the present invention.
- FIG. 13 is a schematic diagram showing the functional structure of a first embodiment of a structural template binding unit according to the present invention.
- FIG. 14 is a schematic diagram showing the functional structure of a second embodiment of a structural template binding unit according to the present invention.
- FIG. 15 is a schematic structural view of a frame of a first embodiment of an apparatus for making an animation according to the present invention.
- Figure 16 is a schematic view showing the structure of a frame of a second embodiment of the apparatus for making an animation of the present invention.
- FIG. 3 is a flow diagram 300 of a method of making an animation, in accordance with an embodiment of the present invention.
- step S31 an animated object is acquired.
- the animated object may be an object in one or more scenes.
- the animated object may be a pedestrian on a road, a small fish in a fish tank, a cloud in the sky, or the like.
- An animated object can also be an image drawn on the drawing plane or an item placed on the drawing plane.
- the drawing plane can be a drawing card with a preset background, or a drawing card with a solid background.
- the drawing plane can be a physical plane such as paper, canvas, or desktop. In practical use, the user can draw with a pen, use material to cut and paste, use material to shape or use physical objects to create the shape of the animated object.
- the animated objects herein include, but are not limited to, characters, animals, various creatures derived from nature, artificially conceived living bodies, and objects or images that do not have life but can be artificially given actions.
- in the embodiment of the present invention, a picture can be formed by drawing on a preset card or by directly placing an item on a preset card. This is not only simple and convenient, but also uses the drawing plane as a background, so the picture appears vivid and lively and enhances the interest of users (especially children).
- the embodiment of the present invention can enhance the professionalism of making animation by drawing on a drawing card with a preset background.
- the embodiment of the present invention can also draw on a drawing card with a solid color as a background, which reduces the production requirement of the animation material and reduces the cost of the animation.
- the image may be taken by the user manually holding a device with a camera, or captured automatically by a fixed device; no limitation is imposed in this respect.
- the method for making an animation further comprises: recording the animated object performing a series of motions frame by frame; generating an animation file from the recorded images, optionally configuring a background and/or audio for the animation file; and displaying and/or storing the animation file.
- the present embodiment can generate an animation file by recording an animated object frame by frame, and display and/or store the animation file in real time, which can facilitate the user to preview or repeatedly view the generated animation file.
- the embodiment can configure a background and/or audio file for the animation, so that a simple animation obtains a professional effect and user satisfaction is enhanced.
- the configured background may be the background on the preset card. Using the background of the preset card as the animation background makes the animation consistent with the background, further increasing the professional effect of the animation.
- the acquired animated object is bound to the corresponding structural template.
- the structure template may be a preset structure template, an automatically generated structure template, or a manually generated structure template, and these may also be used in combination. Since the way the animated object is acquired differs with the type of structure template, the three cases are described below through the embodiments of FIGS. 4, 5 and 6.
- the structure template is a preset structure template.
- step S31-11 a preset structure template is activated in response to the photographing request.
- the preset structure template may exhibit a frame structure, a contour structure, or a bone structure of the animated object.
- Animated object types can include types of people, animals, plants, and objects. Specifically, the types of animated objects may be old people, children, young men, young women, cats, dogs, fish, flowers, birds, and grasses.
- the specific animated object type can be personalized according to the needs of animation production, and the content is not limited.
- an animation material database may be set in advance, and the database may store different types of structure templates according to the type of the animation object.
- a structure template matching the animated object may be drawn in advance based on the animated object to be created, and the matching structure template is stored in the animation material database.
- by presetting an animation material database and continuously adding material to it, the embodiment of the present invention makes the material library richer and richer over time; the user has more and more material to choose from, and the animation effect and professionalism keep improving.
- the photographing request may be a photographing request issued by the user to the device.
- the user can select a shooting icon by means of human-computer interaction.
- the means of human-computer interaction may include, but are not limited to, touch-screen selection, mouse click, keyboard input, tapping on the touch screen, or representations using text, symbols, colors, etc. on the drawing plane or in space that are then entered through the camera.
- the user can hold the device and, guided by the preset structure template, capture a source image containing the image of the target subject once the animated object is roughly aligned with the template.
- the camera can capture multiple images of the front and back frames of the source image, which can be used for image extraction in subsequent steps.
- the preset structure template may be obtained from an existing database, or may be designed by the developer of the software system (or the user) before making the animation.
- the specific design structure template can be implemented as follows:
- a structure template is constructed from a set of data that describes the structure of an object. Specifically, the developer can first conceive the approximate attributes of the structure template and the appearance of the corresponding animated object, and extract the handle nodes or the skeleton tree that can drive the structure template's movement. While designing the structure template, the developer can also design the outline of the animated object and use the contour as the preset area of the object, so that the user can align the image of the target object with the structure template during shooting, facilitating the subsequent animated-object extraction.
- the handle node is usually represented by two-dimensional coordinates
- the skeleton tree is a tree structure data composed of bone data (including bone length, parent bone information and restriction information, etc.).
- the preset area can be represented by a contour vector.
- For example, to define the structure template of a dog, six key points can first be set, corresponding to the four limbs, the head and the tail. The root node can be set at the waist; the waist node contains two bones, the spine and the tail vertebra; the spine carries the two forelimb bones and the neck bone; the tail vertebra carries the two hind limb bones and the tail bone. From these the skeleton tree is formed. Next, the outline of a dog standing on four legs is drawn as the preset area.
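As an illustration of the skeleton tree just described, the sketch below builds the dog skeleton with the waist as the root node. The `Bone` class, method names and bone lengths are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Bone:
    """One bone in a skeleton tree: a length plus links to parent and children."""
    name: str
    length: float
    parent: Optional["Bone"] = None
    children: List["Bone"] = field(default_factory=list)

    def attach(self, name: str, length: float) -> "Bone":
        """Create a child bone and hang it under this bone."""
        child = Bone(name, length, parent=self)
        self.children.append(child)
        return child

# Waist is the root; it carries the spine and the tail vertebra.
# The spine carries the two forelimbs and the neck; the tail
# vertebra carries the two hind limbs and the tail.
waist = Bone("waist", 0.0)
spine = waist.attach("spine", 4.0)
tail_vertebra = waist.attach("tail_vertebra", 3.0)
for side in ("left", "right"):
    spine.attach(f"{side}_forelimb", 2.5)
    tail_vertebra.attach(f"{side}_hindlimb", 2.5)
spine.attach("neck", 1.5)
tail_vertebra.attach("tail", 1.0)

def count_bones(root: Bone) -> int:
    """Total number of bones in the tree rooted at `root`."""
    return 1 + sum(count_bones(c) for c in root.children)
```

Such a tree makes parent/child constraints explicit: moving the spine moves the forelimbs and neck with it, matching the bullet above.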
- the preset structure template can be designed by the developer and stored in the database of the device for making the animation (hereinafter referred to as the device).
- the preset structure template can be stored in an APP application of a tablet or mobile phone, and can be selected manually by the user or automatically by the device.
- the manner in which the user manually selects includes various ways of inputting to the device using human-computer interaction, and the device acquires the structural template stored in the database. The way the device is automatically selected requires the developer to store the activation conditions corresponding to the structural template in the database.
- the device can extract shape features of the animated object by image analysis, such as the position and number of protruding portions and the overall area and height of the outline, classify the outline of the animated object, and thereby select which template to use. For example, the device may determine the number of legs from the number of protruding portions at the bottom of the overall contour, and use the height of the shape to judge whether the object is upright or crawling, thereby determining, for example, that the animated object should use an upright human structure template.
- activating the preset structure template can guide the user, when capturing the source image, to take the picture only after the image of the target object roughly matches the template.
- This design makes it easy to bind the animated object directly to the structure template later, removing the step of aligning the animated object with the template, reducing the amount of data processing, and improving the success rate of binding the animated object to the structure template, which in turn improves the effect and quality of the animation.
- the specific photographing device or device may be any computing device that includes or can be connected to the camera, including but not limited to a computer, a mobile phone, a tablet computer, various handheld computing devices, and a stationary experience device.
- the device or device may also be provided with a display device such as a display screen, a projector, etc. for previewing and ultimately rendering the animated work.
- step S31-13 the connected pixel group in the preset structure template region is extracted from the source image to generate an animated object.
- the animated object in the source image can be extracted by image analysis. Specifically, contour searching and morphological processing may be performed on the pixels in the preset area (for example, the area where the preset structure template is located), and the connected pixel group is extracted as the animation object. If no pixel group in the preset area meets the criteria, animation object extraction fails; in that case the source image can be retaken and the animated object re-extracted.
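The region-restricted extraction of a connected pixel group can be sketched as below. This toy flood fill stands in for the contour search and morphological processing named above; the function name, the 0/1 mask format and the inclusive region bounds are illustrative assumptions.

```python
from collections import deque

def largest_group_in_region(mask, region):
    """Return the largest 4-connected set of foreground pixels lying inside
    the preset region, or None if the region contains none (extraction fails).
    mask: 2D list of 0/1 values; region: (r0, c0, r1, c1), inclusive bounds."""
    r0, c0, r1, c1 = region
    seen, best = set(), None
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            if mask[r][c] and (r, c) not in seen:
                # Breadth-first flood fill of one connected pixel group.
                group, queue = set(), deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    group.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (r0 <= ny <= r1 and c0 <= nx <= c1
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                if best is None or len(group) > len(best):
                    best = group
    return best  # None signals that the source image should be retaken
```

Returning `None` corresponds to the failure case in the bullet above: no qualifying pixel group in the preset area.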
- FIG. 5 is a flow diagram 500 of a second embodiment of acquiring an animated object (ie, step S31 in FIG. 3).
- the structure template is an automatically generated structure template.
- step S31-21 the source image including the target subject is taken.
- the device can be fixed on the desktop or the ground so that its camera captures a drawing plane or a scene, and the device automatically collects and analyzes each frame of image. For example, when no object motion is found on the drawing plane or in the scene for a period of time, it can be determined that the user's drawing or placing action is complete, and the device selects the image at that moment as the source image.
- the method of judging that there is no object motion in a certain period of time can be realized by various motion detection methods in computer image analysis, such as frame difference method, optical flow method and the like.
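A minimal frame-difference check of the kind mentioned above might look like the sketch below. The thresholds and the grayscale-list input format are illustrative assumptions; a real implementation would work on camera frames.

```python
def frames_are_still(prev, curr, pixel_thresh=10, motion_thresh=0):
    """Frame-difference motion test: count pixels whose grayscale value
    changed by more than pixel_thresh; the scene counts as still when that
    count does not exceed motion_thresh. prev/curr: 2D lists of gray values."""
    moved = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > pixel_thresh
    )
    return moved <= motion_thresh
```

The device could select the source image once this returns `True` for several consecutive frames, i.e. the user has stopped drawing or placing.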
- step S31-22 the structure of the target subject is extracted from the source image, and the lines in the structure are simplified to form an automatically generated structure template.
- the automatically generated structure template can use a common shape-analysis algorithm to extract the skeleton of the connected region, remove short skeleton branches, and simplify the skeleton lines, by approximation, into polylines with fewer key points.
- the simplified skeleton lines can be converted into different types of structure templates.
- one implementation is to select the intersections of the skeleton lines and use them as handle nodes; or to select a central intersection as the root node and, from the other intersections, build a tree growing out of the root node as the skeleton.
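The step of simplifying skeleton lines into polylines with fewer key points could, for instance, use the Ramer-Douglas-Peucker algorithm. The sketch below is one such approximation method, not the one prescribed by the patent.

```python
import math

def simplify_polyline(points, tolerance):
    """Ramer-Douglas-Peucker: reduce a skeleton polyline to fewer key
    points, keeping every point farther than `tolerance` from the chord."""
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of each interior point from the chord.
    dists = [abs(dy * (x - x0) - dx * (y - y0)) / norm for x, y in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > tolerance:
        # Keep the farthest point and recurse on both halves.
        left = simplify_polyline(points[:idx + 1], tolerance)
        right = simplify_polyline(points[idx:], tolerance)
        return left[:-1] + right
    return [points[0], points[-1]]
```

A nearly straight noisy branch collapses to its two endpoints, while a sharp corner (a candidate handle node or joint) survives the simplification.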
- step S31-23 the connected pixel group is extracted from the source image to generate an animation object.
- various image segmentation algorithms may be used to extract the maximum connected domain pixels in the image that meet certain conditions (eg, the pixel color is uniform and located near the center of the source image).
- alternatively, background subtraction may be performed using the frames before and after the source image in the captured frame sequence to extract the foreground, and the foreground pixels are used as the animation object.
- step S31-22 and step S31-23 can be performed in either order: the method can first extract the animation object and then automatically generate the structure template, or first generate the structure template and then extract the animation object. Both orders fall within the scope of protection of this embodiment.
- FIG. 6 is a flow diagram 600 of a third embodiment of acquiring an animated object (ie, step S31 in FIG. 3).
- the structural template is a manually generated structural template.
- step S31-31 a source image including an image of the target subject is photographed in response to the photographing request.
- step S31-32: the positions of key points (such as handle nodes, or a bone root node and child nodes) are input on the image of the target subject by means of human-computer interaction (for example, tapping the touch screen), and the key points are connected to form a manually generated structure template.
- the method of extracting the object by the device may be fully automatic or semi-automatic.
- the semi-automatic method is based on a fully automatic basis and is extracted by the user's guidance device.
- for example, the user may outline part of the object with a red watercolor pen; the device can then classify the outline colors with a pattern-recognition method, filter the automatically extracted pixel groups for the regions with red outlines, and keep only the pixels in those regions as the object.
- the above three cases may be combined to different degrees.
- the following is an example of an implementation in which a preset structure template is combined with an automatically generated structure template.
- the topology of the structure template is predefined, such as a quadruped or an upright human, while the lengths of the torso and limbs of the animated object are generated automatically. Specifically, after the topological skeleton of the object is extracted, the lengths of the corresponding skeleton lines are computed so that the predefined skeleton better fits the object.
- steps S31-33 the connected pixel group is extracted from the source image to generate an animation object.
- step S31-32 and step S31-33 can be performed in either order: the method can first extract the animation object and then form the manually generated structure template, or first form the structure template and then extract the animation object. Both orders fall within the scope of protection of this embodiment.
- step S32 the animated object is bound to the corresponding structural template. This step is described in detail below by two embodiments.
- FIG. 7 is a flow diagram 700 of the first embodiment of FIG. 3 for binding an animated object to a corresponding structural template (ie, step S32 in FIG. 3).
- the structure template is composed of handle nodes.
- step S32-11 the animated object is meshed.
- the animated object is meshed, where the mesh cells may be triangles, quadrilaterals or other irregular shapes.
- any common meshing algorithm can be used to implement the meshing process, which is not limited in this respect.
- step S32-12 a grid point close to the handle node in the structure template is selected in the grid, and the grid point is used as a constraint point of the grid deformation, and the animation object is bound to the corresponding structure template.
- handle nodes are set in the structure template, and each handle node controls one component in the structure template.
- the function of the handle node is similar to the joint of an animal.
- the handle node of the knee joint in the human structural template can control the movement of the leg in the skeleton.
- Selecting the grid point in the grid that is closest to a handle node in the structure template can be implemented by computing the Euclidean distance.
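The nearest-grid-point selection can be sketched as below; the function name and the 2D tuple representation of nodes and grid points are illustrative.

```python
import math

def bind_handles_to_grid(handles, grid_points):
    """For each handle node, pick the closest mesh grid point by Euclidean
    distance; those grid points become the constraint points of the mesh
    deformation, binding the animated object to the structure template."""
    def nearest(h):
        hx, hy = h
        return min(grid_points, key=lambda p: math.hypot(p[0] - hx, p[1] - hy))
    return {h: nearest(h) for h in handles}
```

The resulting mapping is what later steps use: moving a handle node moves its constraint point, and the mesh deforms around it.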
- the animated object can be aligned with the structural template before the animated object is bound to the structural template.
- if the structure template was automatically generated by the device or manually created by the user, the key points in the template were generated from the current animated object, so their positions are already accurately aligned and no alignment is required. If the structure template is fully predefined, the template needs to be moved to the preset position on the animated object.
- the processing can be divided into the following two cases:
- If the source image was taken by the user under the indication of the preset object area, the predefined structure template in the database is already aligned with that area, and the animated object coincides with the preset object area; this is equivalent to the user having manually aligned the structure template with the animated object, so no further alignment of the structure template is needed.
- If the source image was not acquired under the indication of the object area, the device needs to compute separately the coordinates, size and contour-axis inclination of the animated object area and of the object, derive from the results the displacement, scale and angle required for alignment in the coordinate system, and move the object area to the position matching the animated object by panning, zooming, rotating, etc., so that the structure template is aligned with the object.
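The translate-and-scale part of this alignment can be sketched as below (rotation is omitted for brevity; the function name and the `(min_x, min_y, max_x, max_y)` box convention are illustrative assumptions).

```python
def align_template(template_pts, template_box, object_box):
    """Map template points into the animated object's bounding box by
    translating and scaling. Boxes are (min_x, min_y, max_x, max_y);
    a real implementation would also rotate by the contour-axis angle."""
    tx0, ty0, tx1, ty1 = template_box
    ox0, oy0, ox1, oy1 = object_box
    sx = (ox1 - ox0) / (tx1 - tx0)   # horizontal scale factor
    sy = (oy1 - oy0) / (ty1 - ty0)   # vertical scale factor
    return [(ox0 + (x - tx0) * sx, oy0 + (y - ty0) * sy)
            for x, y in template_pts]
```

After this mapping, the template's key points sit inside the object's area, so the binding step can proceed as in the aligned case.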
- Figure 8 is a flow diagram of a second embodiment of binding an animated object to a corresponding structural template (i.e., step S32 in Figure 3).
- the structural template is composed of a skeleton tree.
- step S32-21 the animated object is meshed.
- step S32-22 the animation object is bound to the corresponding structure template by using the method of skinning the skeleton tree.
- the animation object may also be bound to the structure template by other binding methods, which can be chosen according to the type of the structure template.
- step S33 the structure template is actuated to drive the animated object bound to the structure template to perform a corresponding action.
- the structure template can perform corresponding actions based on the preset actions.
- the structure template can be correspondingly acted upon based on preset motion rules.
- the handle node in the structure template or the node in the skeleton tree can be dragged through the human-computer interaction input mode to perform corresponding actions. The above will be explained one by one below.
- the preset action may be one, a series, one set or multiple sets of designed actions.
- One action, a series of actions, or one or more sets of actions can be stored for each preset structure template.
- the selection of the animation action is done manually by the user or by random assignment of the device.
- the action can express the motion and deformation of the structure template. Specifically, it can be represented by the displacement of handle nodes in key frames, or by the movement of bones in key frames.
- the preset action may be change data of the handle nodes or of the skeleton tree, pre-recorded by the developer for each key frame. The animation data is the motion vector of each handle node in each key frame, or the displacement and rotation of each bone in each key frame.
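Between such key frames, transition positions are commonly filled in by interpolating each handle node's recorded position. The linear interpolation below is a hedged sketch of that idea, not the patent's prescribed method; the `(time, (x, y))` keyframe format is an assumption.

```python
def interpolate_handle(keyframes, t):
    """keyframes: time-sorted list of (time, (x, y)) samples for one handle
    node. Return the node's position at time t by linear interpolation,
    which fills the transition frames between stored key frames."""
    t0, p0 = keyframes[0]
    if t <= t0:
        return p0
    for (ta, pa), (tb, pb) in zip(keyframes, keyframes[1:]):
        if ta <= t <= tb:
            w = (t - ta) / (tb - ta)          # blend weight in [0, 1]
            return (pa[0] + w * (pb[0] - pa[0]),
                    pa[1] + w * (pb[1] - pa[1]))
    return keyframes[-1][1]                   # clamp past the last key frame
```

Evaluating this per handle node per frame yields the handle motion vectors that the structure template applies to the bound mesh.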
- the developer can draw a standard image of the animated object, bind the structural template on the standard image, drag the key points in the structural template, and drive the standard character motion to preview the animation.
- the preset action may include an action for expressing an emotion type of an object to be animated, or an action for expressing a motion type of the object to be animated.
- driving the animated object to perform the corresponding action through the skeleton model may be implemented as follows:
- a mesh or triangle is created based on the shape of the object, and the mesh or triangle is subject to handle nodes in the structure template, resulting in movement and deformation.
- the object is attached to the grid or triangle as a texture, and the object is subject to the handle node in the structure template to form an action.
- the vertex coordinates of the mesh or triangle patches can be determined by dividing the space into several regions, presetting the coordinates of certain points, and deriving the remaining coordinates by means of functions, calculus, matrices and the like.
- the location can be adjusted several times to achieve the best results.
- Developers can design several limb actions for each object.
- the device directly applies one or a combination of a plurality of appropriate limb movements according to the emotion type of the object, so that the skeleton structure drives the object to move and form an animation.
- Developers can also implement personalized actions through various human-computer interaction methods without using predefined actions.
- the action may be composed of a sequence of key frames storing the position of the handle node, and the position of the handle node pre-stored in the computing device and the displacement in each key frame may be freely designed by the developer.
- Animation data produced by developers can be previewed and modified by binding to standard images.
- the displayed image may not be a standard image, but an image drawn by the user, and may of course be an image obtained from other channels, such as an image downloaded from a network.
- the device maps the motion of the handle nodes onto the mesh or triangle nodes of the object, so that the structure template can drive the animated object to move according to the preset action.
- Structure templates should be designed with as few handle nodes as possible, and the initial spacing between handle nodes should be as large as possible, so that the motion of the object has greater freedom and conflicts between limb components are avoided.
- animations can also be generated automatically by letting the handle nodes or bones move by themselves according to certain rules.
- gravity motion is a kind of motion rule.
- the corresponding automatic animation method can be: the handle node or bone is given a certain mass, and under a simulated real gravity field it droops and falls, forming the action.
- this rule-driven node motion can be implemented with a common off-the-shelf physics engine.
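A minimal stand-in for such a physics engine, applying the gravity rule to a single handle node with explicit Euler integration, might look like this. The time step, gravity constant and ground plane are illustrative assumptions.

```python
def simulate_drop(y0, steps, dt=0.1, g=9.8, ground=0.0):
    """Explicit-Euler sketch of the gravity rule: a handle node with mass
    falls under simulated gravity and comes to rest on the ground plane.
    Returns the node's vertical position after each step."""
    y, vy = y0, 0.0
    trail = []
    for _ in range(steps):
        vy -= g * dt          # gravity accelerates the node downward
        y += vy * dt
        if y < ground:        # clamp at the ground and stop the fall
            y, vy = ground, 0.0
        trail.append(y)
    return trail
```

The same loop run per handle node (or per bone joint) produces the drooping, falling motion described above without any pre-recorded animation data.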
- the user can use the input mode of human-computer interaction, for example, dragging the handle node or the bone node with a mouse or a touch to generate an action desired by the user.
- the user can combine the way the animation is automatically created with the way the animation is made manually.
- the nodes of the structural template will always be subjected to the gravity field, and the user can exert an external force on the node in an interactive manner.
- the device can make an animation according to the effect produced by the simulation automatic and manual force superposition.
- the device applies the animation data to the structural template to drive the animation object to move.
- depending on the type of structure template, the implementation of driving the animated object also differs.
- the first implementation may be: if the structure template is implemented with handle nodes, the motion vector of a handle node is passed directly to the constraint point bound to that node, displacing the constraint point.
- the device recalculates the position of each vertex according to the change of the position of the constraint point, thereby generating a deformed mesh.
- the algorithm for calculating the position of the constrained vertex can be implemented by any reasonable method, and there is no limitation in this respect.
- the device then maps the pixel group of the animated object as a texture to the changed mesh, and completes the motion change of the animated object in this frame.
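One way the constraint-point displacement could propagate to the rest of the mesh is sketched below. The inverse-distance weighting is a simple stand-in for whatever vertex-recalculation method the device actually uses, as the patent leaves that algorithm open; names and data formats are illustrative.

```python
import math

def deform_mesh(vertices, constraints):
    """Propagate handle motion to the whole mesh. `constraints` maps a
    vertex index to its handle's (dx, dy) motion vector; constrained
    vertices take that vector directly, and free vertices move by an
    inverse-distance-weighted blend of the constraint displacements."""
    out = []
    for i, (x, y) in enumerate(vertices):
        if i in constraints:
            dx, dy = constraints[i]
        else:
            wsum = dx = dy = 0.0
            for j, (mx, my) in constraints.items():
                d = math.hypot(x - vertices[j][0], y - vertices[j][1])
                w = 1.0 / (d + 1e-9)      # nearer constraints dominate
                wsum += w
                dx += w * mx
                dy += w * my
            dx, dy = dx / wsum, dy / wsum
        out.append((x + dx, y + dy))
    return out
```

Texturing the animated object's pixel group onto the returned mesh then completes the frame, as described in the bullet above.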
- the second implementation may be: if the structure template is implemented as a bone tree, the movement of a bone joint moves the whole bone, and the bone drives the skin mesh bound to it to displace and deform.
- the specific implementation method can be any common general algorithm, and there is no limitation in this aspect.
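Linear blend skinning is one such common general algorithm for the bone-tree variant: each skin vertex follows a weighted blend of its bones' transforms. A minimal 2D sketch, with illustrative names and transforms (the text does not mandate this particular algorithm):

```python
# Minimal 2D linear blend skinning: a vertex is deformed by the weighted
# sum of per-bone rotations and translations. Illustrative sketch only.
import math

def rotate(p, angle):
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def skin_vertex(v, bone_transforms, weights):
    """bone_transforms: list of (rotation_angle, (tx, ty)) per bone;
    weights: per-bone skinning weights that sum to 1."""
    x = y = 0.0
    for (angle, (tx, ty)), w in zip(bone_transforms, weights):
        rx, ry = rotate(v, angle)
        x += w * (rx + tx)
        y += w * (ry + ty)
    return (x, y)
```

A vertex weighted half-and-half between an unrotated bone and a bone rotated 90 degrees lands midway between the two poses.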
- the device can loop the animation automatically until the user selects another set of animation actions, or re-acquires another animated object, after which the new animation loops automatically.
- the preset structure template thus drives the animated object to perform the corresponding actions.
- the child selects the object type by touch on the tablet - a puppy - and binds the picture of the puppy to the puppy structure template;
- the child selects the emotion type by touch on the tablet - frustration - and an annoyed puppy animation appears on the tablet screen: the puppy bows its head and crouches on the ground.
- the child selects the object contour by touch on the tablet - a circle - and binds the picture of the small ball to the circular structure template;
- the child selects the action type by touch on the tablet - bouncing - and an animation of the small ball appears on the tablet screen: the ball jumps into the air, falls to the ground, bounces up, and jumps into the air again.
- the user may also be someone other than a child, for example a young, middle-aged, or elderly person.
- because making the animation is simple, convenient, and fun, this method is especially attractive to children who like to explore and create.
- these children do not need to master the animation principles that a professional animator must master; with a few simple selections they can easily create the animations they like.
- different types of structure templates may be stored in the preset animation material database according to the type of the animated object.
- object types can include people, animals, plants, and items; specific object types can be an old person, a child, a young man, a young woman, a cat, a dog, a fish, a flower, a bird, or grass. Object types can also be customized according to the needs of the animation; there is no limitation in this respect.
- the preset animation material database may also store structure templates of corresponding contour shapes for the different contour shapes of animated objects.
- the object contour shape may be a circle, a square, a rectangle, a star, a ring, or the like. The contour shape can be not only planar but also three-dimensional, and the specific contour can be customized according to the needs of the animation; there is no limitation in this respect.
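A hypothetical layout for such a material database, indexed both by object type and by contour shape so that either selection path retrieves a template. Every identifier below is invented for illustration; the patent does not specify a storage format.

```python
# Hypothetical sketch of the preset animation material database: structure
# templates indexed by object type and by contour shape. All names are
# illustrative, not from the patent.

MATERIAL_DB = {
    "object_type": {
        "dog": "quadruped_bone_tree",
        "child": "biped_bone_tree",
        "flower": "stem_handle_nodes",
    },
    "contour_shape": {
        "circle": "circle_handle_nodes",
        "star": "star_handle_nodes",
    },
}

def lookup_template(category, key):
    """Return the stored template id for a selection, or None."""
    return MATERIAL_DB.get(category, {}).get(key)
```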
- the emotion types may include one or more of: happiness, disappointment, calm, anger, grief, sorrow, boredom, fear, horror, respect, affection, compassion, greed, jealousy, arrogance, and shame.
- the emotion type of the object can be represented by the object's action, and selecting an emotion type calls up the corresponding action data. For example, if the animated object is a dog and the user selects happiness, the dog wags its tail in place.
- the object type and emotion type can also be selected automatically by the device according to characteristics of the object, such as its height, its number of legs, or the brightness and color of its body.
- automatic selection is particularly suitable for simple situations, for example simply dividing objects into two types: upright walking animals and crawling quadrupeds.
- the number of legs is judged simply from the number of forks under the overall shape, the body height is used to decide between an upright and a crawling posture, and the object is then simply given an upright walking or a crawling action.
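The simple heuristic above might be sketched as follows; the fork count and the height/width comparison stand in for the silhouette analysis, and the rule itself is an illustrative assumption:

```python
# Sketch of the simple automatic classification described above: forks
# under the silhouette count as legs, and a tall, narrow bounding box
# suggests an upright posture. Thresholds are illustrative assumptions.

def classify_action(fork_count, bbox_height, bbox_width):
    upright = bbox_height > bbox_width   # tall silhouette -> standing
    if upright and fork_count <= 2:
        return "upright_walk"
    return "quadruped_crawl"
```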
- the user does not need professional knowledge of animation principles and can easily create animations they like, with professional-looking results, through simple operations; the approach is simple, convenient, lively, and interesting, and has a wide range of applications.
- the embodiment of the present invention captures a self-created picture and extracts the animated object from the image to make an animation, which is simple, convenient, and highly engaging.
- this combination of painting and animation is particularly suitable for children.
- FIG. 9 is a functional structural diagram 900 of an apparatus for making an animation according to an embodiment of the present invention.
- the apparatus for making an animation may include: an animated object acquisition unit, an animated object binding unit, and an animation production unit, wherein:
- the animated object acquisition unit can be used to acquire an animated object.
- the animated object binding unit can be used to bind the animated object to the corresponding structure template.
- the animation production unit can be used to cause the structure template to act so as to drive the animated object bound to the structure template to perform corresponding actions.
- FIG. 10 is a schematic diagram 1000 of a functional structure of a first embodiment of an animation object acquisition unit according to the present invention.
- the animated object acquisition unit may include: a first structure template activation module, a first image capturing module, and a first animated object extraction module, wherein:
- the first structural template activation module can be configured to activate the preset structural template in response to the photographing request.
- the first image capturing module may be configured to capture a source image including an image of the target subject after the image of the target subject is substantially matched with the preset structure template.
- the first animated object extraction module may be configured to extract a connected pixel group in the preset structure template region from the source image to generate an animated object.
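Extracting a connected pixel group can be done with an ordinary flood fill; the patent does not prescribe an algorithm, so the 4-connected breadth-first sketch below, seeded from a point inside the template region, is only one possibility.

```python
# One possible way to extract a connected pixel group: a 4-connected
# breadth-first flood fill over a binary foreground mask, seeded inside
# the structure-template region. Illustrative sketch only.
from collections import deque

def connected_group(mask, seed):
    """mask: 2D list of 0/1 foreground flags; seed: (row, col) inside
    the template region. Returns the connected foreground pixels."""
    h, w = len(mask), len(mask[0])
    group, frontier = set(), deque([seed])
    while frontier:
        r, c = frontier.popleft()
        if not (0 <= r < h and 0 <= c < w):
            continue
        if (r, c) in group or not mask[r][c]:
            continue
        group.add((r, c))
        frontier.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return group
```

Pixels connected to the seed form the animated object; disconnected foreground pixels (for example, a separate doodle on the same card) are excluded.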
- FIG. 11 is a schematic diagram 1100 of a functional structure of a second embodiment of an animation object acquisition unit according to the present invention.
- the animated object acquisition unit may include: a second image capturing module, a second structure template generating module, and a second animated object extraction module, wherein:
- the second image capturing module may be used to capture a source image including the target subject.
- the second structure template generating module can be used to extract the structure of the target subject from the source image and simplify the lines in the structure to form an automatically generated structure template.
- the second animated object extraction module may be configured to extract connected pixel groups from the source image to generate an animated object.
- FIG. 12 is a schematic diagram 1200 of a functional structure of a third embodiment of an animation object acquisition unit according to the present invention.
- the animated object acquisition unit may include: a third image capturing module, a third structure template generating module, and a third animated object extraction module, wherein:
- the third image capturing module may be configured to capture, in response to the photographing request, a source image including the image of the target subject.
- the third structure template generating module may be configured to input the positions of key points in the image of the target subject by means of human-computer interaction, and to connect the key points to form a manually generated structure template.
- the third animated object extraction module may be configured to extract connected pixel groups from the source image to generate an animated object.
- the first, second, and third image capturing modules, and other similar modules, may be implemented with the same hardware or with different hardware according to actual needs; there is no limitation in this respect.
- FIG. 13 is a schematic diagram 1300 of a functional structure of a first embodiment of a structural template binding unit according to the present invention.
- the structure template is composed of a handle node.
- the structure template binding unit may include: a first meshing module and a first animated object binding module, wherein:
- the first meshing processing module can be used to mesh the animated objects.
- the first animated object binding module can be used to select grid points in the mesh that are close to the handle nodes in the structure template and use those grid points as constraint points of the mesh deformation, binding the animated object to the corresponding structure template.
- FIG. 14 is a schematic diagram 1400 of a functional structure of a second embodiment of a structural template binding unit according to the present invention.
- the structure template is composed of a skeleton tree.
- the structure template binding unit may include: a second meshing module and a second animated object binding module, wherein:
- the second meshing processing module can be used to mesh the animated objects.
- the second animated object binding module can be used to bind the animated object to the corresponding structure template by skinning the skeleton tree.
- the animation production unit may cause the structure template to act in one or more of the following ways: based on a preset action; based on a preset motion rule; or by dragging a handle node in the structure template or a node in the skeleton tree through human-computer interaction input.
- the animated object may include: a target object within one or more scenes, a target image drawn on a drawing plane, or a target item placed on a drawing plane.
- the drawing plane includes: a drawing card with a preset background, or a drawing card with a solid color as the background.
- the device for making an animation may further include: an animated object recording unit, an animation file generating unit, and an animation file display/storage unit, wherein:
- the animated object recording unit can be used to record, frame by frame, the animated object performing its actions.
- the animation file generating unit may be configured to generate an animation file from the recorded frames, or to generate an animation file from the recorded frames and configure a background and/or an audio file for it.
- the animation file display/storage unit can be used to display and/or store animation files.
- the device for making an animation in each of the above embodiments may be the execution body of the method for making an animation, and each functional module of the device implements the corresponding step of the method.
- the relevant functional modules can be implemented by a hardware processor.
- each functional module only needs to implement its own functions; the specific connections among them are not limited. Since the device embodiments correspond to the method embodiments, those skilled in the art can understand that, for convenience and brevity of description, the specific workflow of each functional unit described above can be found in the corresponding steps of the foregoing method embodiments and is not repeated here.
- Figure 15 is a schematic view of a frame structure 1500 of a first embodiment of an apparatus for making animations according to the present invention.
- the device for making an animation may include: a memory, a processor, and a display, wherein:
- the memory can be used to store material data and programs.
- the processor can be used to execute a program stored in the memory, the program causing the processor to perform the following operations: acquiring an animated object; binding the animated object to the corresponding structure template; and causing the structure template to act so as to drive the animated object bound to the structure template to perform corresponding actions.
- the display can be used to display the animated object performing the corresponding actions.
- Figure 16 is a schematic view of a frame structure 1600 of a second embodiment of an apparatus for making animations according to the present invention.
- the apparatus may include a central processing unit (CPU) that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage portion into a random access memory (RAM). The RAM also stores the various programs and data required for the operation of the device.
- the CPU, ROM, and RAM are connected to one another through a communication bus.
- an input/output (I/O) interface is also connected to the bus.
- the following components are connected to the I/O interface: an input portion including a keyboard, a mouse, and the like; an output portion including a cathode ray tube (CRT) or liquid crystal display (LCD) and a speaker; a storage portion including a hard disk or the like; and a communication portion including a network interface such as a LAN card or a modem.
- the communication section performs communication processing via a network such as the Internet.
- a drive is also connected to the I/O interface as needed.
- a removable medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive as needed, so that a computer program read from it can be installed into the storage portion.
- the functional blocks shown in the block diagrams described above may be implemented as hardware, software, firmware, or a combination thereof.
- when implemented in hardware, a functional block can be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like.
- when implemented in software, the elements of the present invention are the programs or code segments used to perform the required tasks.
- the programs or code segments can be stored in a machine-readable medium or transmitted over a transmission medium or communication link as a data signal carried in a carrier wave.
- a "machine-readable medium" can include any medium that can store or transfer information.
- examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, radio-frequency (RF) links, and the like.
- the code segments can be downloaded via a computer network such as the Internet, an intranet, and the like.
- each functional unit or module in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
Claims (20)
- 1. A method for making an animation, comprising: acquiring an animated object; binding the animated object to a corresponding structure template; and causing the structure template to act so as to drive the animated object bound to the structure template to perform a corresponding action.
- 2. The method according to claim 1, wherein the structure template is a preset structure template, and acquiring the animated object comprises: activating the preset structure template in response to a photographing request; after the image of a target subject substantially matches the preset structure template, capturing a source image including the image of the target subject; and extracting, from the source image, the connected pixel group located within the region of the preset structure template to generate the animated object.
- 3. The method according to claim 1, wherein the structure template is an automatically generated structure template, and acquiring the animated object comprises: capturing a source image including a target subject; extracting the structure of the target subject from the source image and simplifying the lines in the structure to form the automatically generated structure template; and extracting a connected pixel group from the source image to generate the animated object.
- 4. The method according to claim 1, wherein the structure template is a manually generated structure template, and acquiring the animated object comprises: capturing, in response to a photographing request, a source image including the image of a target subject; inputting, by means of human-computer interaction, the positions of key points in the image of the target subject and connecting the key points to form the manually generated structure template; and extracting a connected pixel group from the source image to generate the animated object.
- 5. The method according to any one of claims 1-4, wherein the structure template is composed of handle nodes, and binding the animated object to the corresponding structure template comprises: meshing the animated object; and selecting, in the mesh, grid points close to the handle nodes in the structure template and using those grid points as constraint points of the mesh deformation to bind the animated object to the corresponding structure template.
- 6. The method according to claim 5, wherein the structure template is composed of a skeleton tree, and binding the animated object to the corresponding structure template comprises: meshing the animated object; and binding the animated object to the corresponding structure template by skinning the skeleton tree.
- 7. The method according to claim 6, wherein the structure template is caused to act in at least one of the following ways: causing the structure template to act based on a preset action; causing the structure template to act based on a preset motion rule; or dragging, through human-computer interaction input, a handle node in the structure template or a node in the skeleton tree to cause the structure template to act.
- 8. The method according to any one of claims 1-4, wherein the animated object comprises: a target object within one or more scenes, a target image drawn on a drawing plane, or a target item placed on a drawing plane, wherein the drawing plane comprises: a drawing card with a preset background, or a drawing card with a solid color as the background.
- 9. The method according to any one of claims 1-4, further comprising: recording, frame by frame, the animated object performing the actions; generating an animation file from the recorded frames; and displaying and/or storing the animation file.
- 10. The method according to any one of claims 1-4, further comprising: configuring a background and/or an audio file for the animation file.
- 11. An apparatus for making an animation, comprising: an animated object acquisition unit configured to acquire an animated object; an animated object binding unit configured to bind the animated object to a corresponding structure template; and an animation production unit configured to cause the structure template to act so as to drive the animated object bound to the structure template to perform a corresponding action.
- 12. The apparatus according to claim 11, wherein the structure template is a preset structure template, and the animated object acquisition unit comprises: a first structure template activation module configured to activate the preset structure template in response to a photographing request; a first image capturing module configured to capture, after the image of a target subject substantially matches the preset structure template, a source image including the image of the target subject; and a first animated object extraction module configured to extract, from the source image, the connected pixel group located within the region of the preset structure template to generate the animated object.
- 13. The apparatus according to claim 11, wherein the structure template is an automatically generated structure template, and the animated object acquisition unit comprises: a second image capturing module configured to capture a source image including a target subject; a second structure template generating module configured to extract the structure of the target subject from the source image and simplify the lines in the structure to form the automatically generated structure template; and a second animated object extraction module configured to extract a connected pixel group from the source image to generate the animated object.
- 14. The apparatus according to claim 11, wherein the structure template is a manually generated structure template, and the animated object acquisition unit comprises: a third image capturing module configured to capture, in response to a photographing request, a source image including the image of a target subject; a third structure template generating module configured to input, by means of human-computer interaction, the positions of key points in the image of the target subject and connect the key points to form the manually generated structure template; and a third animated object extraction module configured to extract a connected pixel group from the source image to generate the animated object.
- 15. The apparatus according to any one of claims 11-14, wherein the structure template is composed of handle nodes, and the structure template binding unit comprises: a first meshing module configured to mesh the animated object; and a first animated object binding module configured to select, in the mesh, grid points close to the handle nodes in the structure template and use those grid points as constraint points of the mesh deformation to bind the animated object to the corresponding structure template.
- 16. The apparatus according to claim 15, wherein the structure template is composed of a skeleton tree, and the structure template binding unit comprises: a second meshing module configured to mesh the animated object; and a second animated object binding module configured to bind the animated object to the corresponding structure template by skinning the skeleton tree.
- 17. The apparatus according to claim 16, wherein the animation production unit acts in at least one of the following ways: causing the structure template to act based on a preset action; causing the structure template to act based on a preset motion rule; or dragging, through human-computer interaction input, a handle node in the structure template or a node in the skeleton tree to cause the structure template to act.
- 18. The apparatus according to any one of claims 11-14, wherein the animated object comprises: a target object within one or more scenes, a target image drawn on a drawing plane, or a target item placed on a drawing plane, wherein the drawing plane comprises: a drawing card with a preset background, or a drawing card with a solid color as the background.
- 19. The apparatus according to any one of claims 11-14, further comprising: an animated object recording unit configured to record, frame by frame, the animated object performing the actions; an animation file generating unit configured to generate an animation file from the recorded frames, or to generate an animation file from the recorded frames and configure a background and/or an audio file for the animation file; and an animation file display/storage unit configured to display and/or store the animation file.
- 20. An apparatus for making an animation, comprising: a memory configured to store material data and a program; a processor configured to execute the program stored in the memory, the program causing the processor to perform the following operations: acquiring an animated object; binding the animated object to a corresponding structure template; and causing the structure template to act so as to drive the animated object bound to the structure template to perform a corresponding action; and a display configured to display the animated object performing the corresponding action.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019524499A JP2019528544A (ja) | 2016-08-01 | 2017-07-14 | 動画を制作する方法及び装置 |
EP17836271.1A EP3471062A4 (en) | 2016-08-01 | 2017-07-14 | ANIMATION GENERATION METHOD AND DEVICE |
US16/318,202 US20190251730A1 (en) | 2016-08-01 | 2017-07-14 | Method and apparatus for making an animation |
KR1020197003184A KR20190025691A (ko) | 2016-08-01 | 2017-07-14 | 동영상을 제작하는 방법 및 장치 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610622304.0A CN106251389B (zh) | 2016-08-01 | 2016-08-01 | 制作动画的方法和装置 |
CN201610622304.0 | 2016-08-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018024089A1 true WO2018024089A1 (zh) | 2018-02-08 |
Family
ID=57605851
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/092940 WO2018024089A1 (zh) | 2016-08-01 | 2017-07-14 | 制作动画的方法和装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20190251730A1 (zh) |
EP (1) | EP3471062A4 (zh) |
JP (1) | JP2019528544A (zh) |
KR (1) | KR20190025691A (zh) |
CN (1) | CN106251389B (zh) |
WO (1) | WO2018024089A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111445558A (zh) * | 2020-03-23 | 2020-07-24 | 华强方特(深圳)动漫有限公司 | 一种应用Alembic格式的三维制作方法 |
CN112184863A (zh) * | 2020-10-21 | 2021-01-05 | 网易(杭州)网络有限公司 | 一种动画数据的处理方法和装置 |
CN111951360B (zh) * | 2020-08-14 | 2023-06-23 | 腾讯科技(深圳)有限公司 | 动画模型处理方法、装置、电子设备及可读存储介质 |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106251389B (zh) * | 2016-08-01 | 2019-12-24 | 北京小小牛创意科技有限公司 | 制作动画的方法和装置 |
CN107391144B (zh) * | 2017-07-27 | 2020-01-03 | 武汉斗鱼网络科技有限公司 | 视图展示方法及装置 |
CN108170782A (zh) * | 2017-12-26 | 2018-06-15 | 郑州威科姆科技股份有限公司 | 一种教学动画资源批量生成系统 |
CN108921919A (zh) * | 2018-06-08 | 2018-11-30 | 北京小小牛创意科技有限公司 | 动画展示、制作方法及装置 |
CN111640176A (zh) | 2018-06-21 | 2020-09-08 | 华为技术有限公司 | 一种物体建模运动方法、装置与设备 |
CN109684487A (zh) * | 2018-11-06 | 2019-04-26 | 北京小小牛创意科技有限公司 | 媒体文件及其生成方法和播放方法 |
US10643365B1 (en) * | 2018-11-20 | 2020-05-05 | Adobe Inc. | Deformation mesh control for a computer animated artwork |
CN110211208A (zh) * | 2019-06-06 | 2019-09-06 | 山西师范大学 | 一种3dmax动画辅助制作系统 |
CN113345057A (zh) * | 2020-02-18 | 2021-09-03 | 京东方科技集团股份有限公司 | 动画形象的生成方法、设备及存储介质 |
CN111968201A (zh) * | 2020-08-11 | 2020-11-20 | 深圳市前海手绘科技文化有限公司 | 一种基于手绘素材的手绘动画素材生成方法 |
CN112991500A (zh) * | 2021-03-12 | 2021-06-18 | 广东三维家信息科技有限公司 | 一种家装影视动画方法、装置、电子设备及存储介质 |
CN113050795A (zh) * | 2021-03-24 | 2021-06-29 | 北京百度网讯科技有限公司 | 虚拟形象的生成方法及装置 |
CN113546415B (zh) * | 2021-08-11 | 2024-03-29 | 北京字跳网络技术有限公司 | 剧情动画播放方法、生成方法、终端、装置及设备 |
CN114642863A (zh) * | 2022-03-16 | 2022-06-21 | 温州大学 | 一种用于幼儿园的户外体育游戏系统 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101271593A (zh) * | 2008-04-03 | 2008-09-24 | 石家庄市桥西区深度动画工作室 | 一种3Dmax动画辅助制作系统 |
US20090153569A1 (en) * | 2007-12-17 | 2009-06-18 | Electronics And Telecommunications Research Institute | Method for tracking head motion for 3D facial model animation from video stream |
CN101968892A (zh) * | 2009-07-28 | 2011-02-09 | 上海冰动信息技术有限公司 | 根据一张人脸照片自动调整三维人脸模型的方法 |
US20120218262A1 (en) * | 2009-10-15 | 2012-08-30 | Yeda Research And Development Co. Ltd. | Animation of photo-images via fitting of combined models |
WO2012167475A1 (zh) * | 2011-07-12 | 2012-12-13 | 华为技术有限公司 | 生成形体动画的方法及装置 |
CN104408775A (zh) * | 2014-12-19 | 2015-03-11 | 哈尔滨工业大学 | 基于深度感知的三维皮影戏制作方法 |
CN105608934A (zh) * | 2015-12-21 | 2016-05-25 | 大连新锐天地传媒有限公司 | Ar儿童故事早教舞台剧系统 |
CN106251389A (zh) * | 2016-08-01 | 2016-12-21 | 北京小小牛创意科技有限公司 | 制作动画的方法和装置 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW200540732A (en) * | 2004-06-04 | 2005-12-16 | Bextech Inc | System and method for automatically generating animation |
WO2009031155A2 (en) * | 2007-09-06 | 2009-03-12 | Yeda Research And Development Co. Ltd. | Modelization of objects in images |
US8565476B2 (en) * | 2009-01-30 | 2013-10-22 | Microsoft Corporation | Visual target tracking |
US9424811B2 (en) * | 2013-03-15 | 2016-08-23 | Crayola Llc | Digital collage creation kit |
US20160019708A1 (en) * | 2014-07-17 | 2016-01-21 | Crayola, Llc | Armature and Character Template for Motion Animation Sequence Generation |
CN105447047B (zh) * | 2014-09-02 | 2019-03-15 | 阿里巴巴集团控股有限公司 | 建立拍照模板数据库、提供拍照推荐信息的方法及装置 |
CN104978758A (zh) * | 2015-06-29 | 2015-10-14 | 世优(北京)科技有限公司 | 基于用户创作的图像的动画视频生成方法和装置 |
CN105204859B (zh) * | 2015-09-24 | 2018-09-25 | 广州视睿电子科技有限公司 | 动画管理方法及其系统 |
CN105447896A (zh) * | 2015-11-14 | 2016-03-30 | 华中师范大学 | 一种幼儿动画创作系统 |
CN105446682A (zh) * | 2015-11-17 | 2016-03-30 | 厦门正景智能工程有限公司 | 一种通过投影将儿童涂画转换为动画仿真互动展示系统 |
- 2016
  - 2016-08-01: CN CN201610622304.0A granted as CN106251389B (zh), not active: Expired - Fee Related
- 2017
  - 2017-07-14: JP JP2019524499A published as JP2019528544A (ja), pending
  - 2017-07-14: WO PCT/CN2017/092940 published as WO2018024089A1 (zh), status unknown
  - 2017-07-14: KR KR1020197003184A published as KR20190025691A (ko), not active: Application Discontinuation
  - 2017-07-14: EP EP17836271.1A published as EP3471062A4/EP3471062A1 (en), not active: Withdrawn
  - 2017-07-14: US US16/318,202 published as US20190251730A1 (en), not active: Abandoned
Non-Patent Citations (1)
Title |
---|
See also references of EP3471062A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP3471062A4 (en) | 2020-03-11 |
CN106251389A (zh) | 2016-12-21 |
KR20190025691A (ko) | 2019-03-11 |
US20190251730A1 (en) | 2019-08-15 |
EP3471062A1 (en) | 2019-04-17 |
CN106251389B (zh) | 2019-12-24 |
JP2019528544A (ja) | 2019-10-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018024089A1 (zh) | 制作动画的方法和装置 | |
US12094045B2 (en) | Generating a background that allows a first avatar to take part in an activity with a second avatar | |
US10776981B1 (en) | Entertaining mobile application for animating a single image of a human body and applying effects | |
US10078325B2 (en) | Systems and methods for designing programmable parts for models and optimizing 3D printing | |
KR20210019552A (ko) | 객체 모델링 및 움직임 방법 및 장치, 그리고 기기 | |
CN108062796B (zh) | 基于移动终端的手工制品与虚拟现实体验系统及方法 | |
US10553009B2 (en) | Automatically generating quadruped locomotion controllers | |
CN112669414B (zh) | 动画数据的处理方法及装置、存储介质、计算机设备 | |
Smith et al. | A method for animating children’s drawings of the human figure | |
CN116342763A (zh) | 智能多模态动画创作系统及创作方法 | |
Pantuwong | A tangible interface for 3D character animation using augmented reality technology | |
Yao et al. | ShadowMaker: Sketch-Based Creation Tool for Digital Shadow Puppetry | |
KR20210134229A (ko) | 이미지 증강을 위한 방법 및 전자 장치 | |
Gouvatsos | 3D storyboarding for modern animation. | |
Cai et al. | Immersive interactive virtual fish swarm simulation based on infrared sensors | |
Wang et al. | Animation Generation Technology Based on Deep Learning: Opportunities and Challenges | |
KR20190109639A (ko) | 인공지능을 이용한 3d 어플리케이션 생성 방법 및 장치 | |
US20220335674A1 (en) | Hierarchies to generate animation control rigs | |
Figueroa et al. | A pen and paper interface for animation creation | |
Kundert-Gibbs et al. | Maya® Secrets of the ProsTM | |
Shiratori | User Interfaces for Character Animation and Character Interaction | |
PENG | Sketch2Motion: Sketch-Based Interface for Human Motions Retrieval and Character Animation | |
WO2022071810A1 (en) | Method for operating a character rig in an image-generation system using constraints on reference nodes | |
WO2020261341A1 (ja) | グラフィックゲームプログラム | |
Dean | Unity Character Animation with Mecanim |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17836271; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2019524499; Country of ref document: JP; Kind code of ref document: A |
ENP | Entry into the national phase | Ref document number: 2017836271; Country of ref document: EP; Effective date: 20190111 |
ENP | Entry into the national phase | Ref document number: 20197003184; Country of ref document: KR; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |