WO2018024089A1 - Method and apparatus for making animation - Google Patents

Method and apparatus for making animation

Info

Publication number
WO2018024089A1
WO2018024089A1 · PCT/CN2017/092940 · CN2017092940W
Authority
WO
WIPO (PCT)
Prior art keywords
template
animation
structure template
animated object
animated
Prior art date
Application number
PCT/CN2017/092940
Other languages
English (en)
French (fr)
Inventor
曹翔
时陶
徐文昌
Original Assignee
北京小小牛创意科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京小小牛创意科技有限公司 filed Critical 北京小小牛创意科技有限公司
Priority to KR1020197003184A priority Critical patent/KR20190025691A/ko
Priority to US16/318,202 priority patent/US20190251730A1/en
Priority to EP17836271.1A priority patent/EP3471062A4/en
Priority to JP2019524499A priority patent/JP2019528544A/ja
Publication of WO2018024089A1 publication Critical patent/WO2018024089A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 — Animation
    • G06T13/20 — 3D [Three Dimensional] animation
    • G06T13/40 — 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T13/80 — 2D [Two Dimensional] animation, e.g. using sprites
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 — Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 — Assembly of content; Generation of multimedia applications
    • H04N21/854 — Content authoring

Definitions

  • the invention belongs to the field of animation technology, and in particular relates to a method and device for making animations.
  • FIG. 1 shows the flow of making a frame-by-frame animation.
  • S11: Professional animators draw every frame picture required by the animation.
  • S12: A camera captures each drawn picture to generate a corresponding image.
  • S13: The images are connected in series to generate the animation. Making animations this way requires a professional animator to produce every single frame: the workload is huge, the work is repetitive, and the production is tedious and time-consuming.
  • FIG. 2 shows the flow of making a keyframe animation.
  • S21: Professional animators draw the key frame pictures required by the animation.
  • S22: A computer generates transition frame images between the key frame pictures.
  • S23: The key frame images and the transition frame images are connected in series to generate the animation.
  • Animators need a deep understanding of the motion patterns between the various key frame images before they can use a computer to generate the transition frames between them. This method demands strong professional skills and is not suitable for ordinary users.
  • embodiments of the present invention provide a method and apparatus for making an animation.
  • an embodiment of the present invention provides a method for making an animation, the method comprising:
  • the structure template is actuated to drive the animated object bound to the structure template to perform the corresponding action.
  • an embodiment of the present invention provides an apparatus for making an animation, the apparatus comprising:
  • An animation object acquisition unit for acquiring an animation object
  • An animation object binding unit configured to bind the animation object to a corresponding structure template
  • An animation making unit is configured to move the structural template to drive the animated object bound to the structural template to perform a corresponding action.
  • an embodiment of the present invention provides an apparatus for making an animation, the apparatus comprising:
  • Memory for storing material data and programs
  • processor for executing the program stored by the memory, the program causing the processor to perform the following operations:
  • a display for displaying the animated object performing the corresponding action.
  • with the embodiments of the present invention, the user does not need to master professional animation principles and can easily create an animation he or she likes, with professional effects, through simple operations; the method is simple, convenient, vivid and interesting, and has a wide range of applications.
  • FIG. 1 is a schematic diagram of a first implementation flow of a method for making an animation in the prior art
  • FIG. 2 is a schematic diagram of a second implementation flow of a method for making an animation in the prior art
  • FIG. 3 is a flow chart showing a method of making an animation according to an embodiment of the invention.
  • FIG. 4 is a schematic flow chart of a first embodiment of acquiring an animated object in FIG. 3;
  • FIG. 5 is a schematic flow chart of a second embodiment of acquiring an animated object in FIG. 3;
  • FIG. 6 is a schematic flow chart of a third embodiment of acquiring an animated object in FIG. 3;
  • FIG. 7 is a schematic flow chart of the first embodiment of binding an animated object to a corresponding structural template in FIG. 3;
  • FIG. 8 is a schematic flow chart of a second embodiment of binding an animated object to a corresponding structural template in FIG. 3;
  • FIG. 9 is a schematic diagram showing the functional structure of an apparatus for making an animation according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram showing the functional structure of a first embodiment of an animation object acquiring unit according to the present invention.
  • FIG. 11 is a schematic diagram showing the functional structure of a second embodiment of an animation object acquiring unit according to the present invention.
  • FIG. 12 is a schematic diagram showing the functional structure of a third embodiment of an animation object acquiring unit according to the present invention.
  • FIG. 13 is a schematic diagram showing the functional structure of a first embodiment of a structural template binding unit according to the present invention.
  • FIG. 14 is a schematic diagram showing the functional structure of a second embodiment of a structural template binding unit according to the present invention.
  • FIG. 15 is a schematic structural view of a frame of a first embodiment of an apparatus for making an animation according to the present invention.
  • FIG. 16 is a schematic diagram showing the frame structure of a second embodiment of an apparatus for making an animation according to the present invention.
  • FIG. 3 is a flow diagram 300 of a method of making an animation, in accordance with an embodiment of the present invention.
  • step S31 an animated object is acquired.
  • the animated object may be an object in one or more scenes.
  • the animated object may be a pedestrian on a road, a small fish in a fish tank, a cloud of the sky, or the like.
  • An animated object can also be an image drawn on the drawing plane or an item placed on the drawing plane.
  • the drawing plane can be a drawing card with a preset background, or a drawing card with a solid background.
  • the drawing plane can be a physical plane such as paper, canvas, or desktop. In practical use, the user can draw with a pen, use material to cut and paste, use material to shape or use physical objects to create the shape of the animated object.
  • the animated objects herein include, but are not limited to, characters, animals, various creatures derived from nature, artificially conceived living bodies, and objects or images that do not have life but can be artificially given actions.
  • in the embodiment of the present invention, a picture can be formed by drawing on a preset card or by directly placing an item on a preset card. This is not only simple and convenient, but, with the drawing plane as a background, the picture appears vivid and lively, which enhances the interest of users (especially children).
  • the embodiment of the present invention can enhance the professionalism of making animation by drawing on a drawing card with a preset background.
  • the embodiment of the present invention can also draw on a drawing card with a solid color as a background, which reduces the production requirement of the animation material and reduces the cost of the animation.
  • the image taken by the camera may be captured by the user holding the device by hand, or captured automatically by a fixed portable device; no limitation is imposed in this respect.
  • the method for making an animation further comprises: recording, frame by frame, the animated object performing a series of motions; generating an animation file from the recorded images, or generating an animation file from the recorded images and configuring a background and/or audio for it; and displaying and/or storing the animation file.
  • the present embodiment can generate an animation file by recording an animated object frame by frame, and display and/or store the animation file in real time, which can facilitate the user to preview or repeatedly view the generated animation file.
  • the embodiment can configure the background and/or the audio file for the moving picture, so that the simple animation can obtain the professional effect and enhance the user's satisfaction.
  • the configured background may be the background on the preset card. Therefore, the embodiment can make the animation and the background coordinate by using the background on the preset card as an animated background, thereby further increasing the professional effect of the animation.
  • the acquired animated object is bound to the corresponding structural template.
  • the structure template may be a preset structure template, an automatically generated structure template, or a manually generated structure template, or a combination of these. Since the implementation of acquiring the animated object differs with the type of structural template, these cases are described below with reference to the embodiments of FIGS. 4, 5 and 6.
  • the structure template is a preset structure template.
  • step S31-11 a preset structure template is activated in response to the photographing request.
  • the preset structure template may exhibit a frame structure, a contour structure, or a bone structure of the animated object.
  • Animated object types can include types of people, animals, plants, and objects. Specifically, the types of animated objects may be old people, children, young men, young women, cats, dogs, fish, flowers, birds, and grasses.
  • the specific animated object type can be personalized according to the needs of animation production, and the content is not limited.
  • an animation material database may be set in advance, and the database may store different types of structure templates according to the type of the animation object.
  • the structural template matching an animated object may be drawn in advance based on the animated object to be created, and the matching structure template is stored in the animation material database.
  • by presetting an animation material library and continuously adding material to it, the material in the library becomes richer over time, the user has more and more material to choose from, and the animation effects become better and more professional.
  • the photographing request may be a photographing request issued by the user to the device.
  • the user can select a shooting icon by means of human-computer interaction.
  • the means of human-computer interaction may include, but are not limited to, touch-screen selection, mouse click, keyboard input, tapping the touch screen, or using text, symbols, colors, etc. on the drawing plane or in space and then entering them through the camera.
  • the user can hold the device and, after the animated object is substantially aligned with the preset structure template under its guidance, capture the source image containing the image of the target subject.
  • the camera can capture multiple images of the front and back frames of the source image, which can be used for image extraction in subsequent steps.
  • the preset structure template may be obtained from an existing database, or may be designed by the developer of the software system (or the user) before making the animation.
  • the specific design structure template can be implemented as follows:
  • a structure template is constructed by a set of data used to describe the structure of the object. Specifically, the developer can first conceive the approximate attributes of the structural template and the appearance of the corresponding animated object, and extract the handle node or the skeleton tree that can drive the movement of the structural template. While designing the structural template, the developer can also design the outline of the animated object, and use the contour as the preset area of the object, so that the user can align the image of the target object with the structure template during shooting to facilitate subsequent Animated object extraction operation.
  • the handle node is usually represented by two-dimensional coordinates
  • the skeleton tree is a tree structure data composed of bone data (including bone length, parent bone information and restriction information, etc.).
  • the preset area can be represented by a contour vector.
  • for example, to define the structural template of a dog, six key points can first be set, corresponding to the four limbs, the head and the tail. The root node can be set at the waist; the waist node comprises the two bones of the spine and the tail vertebra, the spine comprises the two forelimb bones and the neck bone, and the tail vertebra comprises the two hind limb bones and a tail bone, from which the skeleton tree is formed. Next, the outline of a dog standing on four legs is drawn as the preset area.
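The skeleton tree described above can be sketched as a small data structure. This is a minimal illustration in Python, not code from the patent; the bone names and lengths are hypothetical, chosen to mirror the dog example (waist root, spine and tail vertebra, forelimbs, hind limbs, neck, tail).

```python
from dataclasses import dataclass, field

@dataclass
class Bone:
    # A bone stores its length and references to its parent and children;
    # real systems also store restriction information, omitted here.
    name: str
    length: float
    parent: "Bone | None" = None
    children: list = field(default_factory=list)

def add_bone(parent, name, length):
    bone = Bone(name, length, parent)
    parent.children.append(bone)
    return bone

# Root node at the waist, as in the dog example above.
waist = Bone("waist", 0.0)
spine = add_bone(waist, "spine", 4.0)
tail_vertebra = add_bone(waist, "tail_vertebra", 2.0)
for side in ("left", "right"):
    add_bone(spine, f"{side}_forelimb", 3.0)
    add_bone(tail_vertebra, f"{side}_hindlimb", 3.0)
add_bone(spine, "neck", 1.5)
add_bone(tail_vertebra, "tail", 2.0)

def count_bones(bone):
    """Count the bones in the subtree rooted at this bone."""
    return 1 + sum(count_bones(c) for c in bone.children)

print(count_bones(waist))  # 9
```

A traversal of this tree is what later drives the skinned mesh: moving a parent bone implicitly moves every bone in its subtree.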
  • the preset structure template can be designed by the developer and stored in the database of the device for making the animation (hereinafter referred to as the device).
  • the preset structure template can be stored in an APP application of a tablet or mobile phone, which can be automatically selected by the user manually or by the device.
  • the manner in which the user manually selects includes various ways of inputting to the device using human-computer interaction, and the device acquires the structural template stored in the database. The way the device is automatically selected requires the developer to store the activation conditions corresponding to the structural template in the database.
  • the device can extract the shape features of the animated object by means of image analysis, such as the position, the number of the highlighted portions, and the overall area and height of the outline to classify the outline of the animated object, thereby selecting which template to use. For example, the device may determine the number of legs according to the number of protruding portions below the overall contour shape of the animated object, and at the same time use the height of the shape to determine whether it is an upright state or a crawling state, thereby determining that the animated object should use an upright human structure.
  • starting the preset structure template can guide the user, when capturing the source image, to take the picture only after the image of the target subject is substantially matched with the preset structure template.
  • This design makes it easy to bind the animated object directly to the structure template later, removing the step of aligning the animated object with the structure template, reducing the amount of data processing, and improving the success rate of binding the animated object to the structural template, which in turn improves the effect and quality of the animation.
  • the specific photographing device or device may be any computing device that includes or can be connected to the camera, including but not limited to a computer, a mobile phone, a tablet computer, various handheld computing devices, and a stationary experience device.
  • the device or device may also be provided with a display device such as a display screen, a projector, etc. for previewing and ultimately rendering the animated work.
  • step S31-13 the connected pixel group in the preset structure template region is extracted from the source image to generate an animated object.
  • the animated object in the source image can be extracted by image analysis. Specifically, contour searching and morphological processing may be performed on the pixels in the preset area (for example, the area where the preset structure template is located), and the connected pixel group is extracted as the animated object. If there is no pixel group meeting the criteria in the preset area, extraction of the animated object fails; in that case, the source image can be retaken and the animated object re-extracted from it.
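Extracting a connected pixel group can be illustrated with a simple flood fill over a binary mask. This is a minimal Python sketch under assumed inputs (a 2D list of 0/1 values standing in for the thresholded preset area), not the patent's actual image-analysis pipeline, which would typically use library routines for contours and morphology.

```python
from collections import deque

def largest_connected_group(mask):
    """Return the pixel coordinates of the largest 4-connected group
    of foreground (1) pixels in a binary mask; an empty list means
    the extraction failed."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                group, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:  # breadth-first flood fill
                    y, x = queue.popleft()
                    group.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(group) > len(best):
                    best = group
    return best

# A toy 5x5 mask: one 4-pixel blob and one isolated pixel.
mask = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
print(len(largest_connected_group(mask)))  # 4
```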
  • FIG. 5 is a flow diagram 500 of a second embodiment of acquiring an animated object (ie, step S31 in FIG. 3).
  • the structure template is an automatically generated structure template.
  • step S31-21 the source image including the target subject is taken.
  • the device can be fixed on the desktop or the ground, so that the camera in the device captures a drawing plane or a scene, and the device automatically collects and analyzes each frame of the image. For example, when it is found that there is no object motion in a drawing plane or in a scene, it can be determined that the user's drawing or placing motion is completed, and the device selects the image of the moment as the source image.
  • the method of judging that there is no object motion in a certain period of time can be realized by various motion detection methods in computer image analysis, such as frame difference method, optical flow method and the like.
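The frame-difference method mentioned above can be sketched in a few lines. This is an illustrative Python example with hypothetical thresholds, not the patent's implementation: two grayscale frames (2D lists) are compared pixel by pixel, and motion is reported when enough pixels changed.

```python
def frame_difference_motion(prev_frame, cur_frame, threshold=10, min_changed=5):
    """Return True if motion is detected between two grayscale frames
    using the simple frame-difference method."""
    changed = sum(
        1
        for prev_row, cur_row in zip(prev_frame, cur_frame)
        for p, c in zip(prev_row, cur_row)
        if abs(p - c) > threshold
    )
    return changed >= min_changed

still = [[100] * 8 for _ in range(8)]
moved = [row[:] for row in still]
for r in range(2, 5):
    for c in range(2, 5):
        moved[r][c] = 200  # a 3x3 patch changes, as if an object moved

print(frame_difference_motion(still, still))  # False
print(frame_difference_motion(still, moved))  # True
```

When this returns False over some window of frames, the device can conclude that drawing or placing is finished and select that moment's frame as the source image.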
  • step S31-22 the structure of the target subject is extracted from the source image, and the lines in the structure are simplified to form an automatically generated structure template.
  • the automatically generated structural template can use a common shape analysis algorithm to extract the skeleton of the connected region, remove short skeleton branches, and simplify the skeleton lines, by an approximation method, into polylines with fewer key points.
  • the simplified skeleton lines can then be converted into different types of structural templates.
  • one implementation is to select the intersections of the skeleton lines and use them as handle nodes, or to select a central intersection as the root node and, from the other intersections, build a tree generated from the root node as the skeleton.
  • step S31-23 the connected pixel group is extracted from the source image to generate an animation object.
  • various image segmentation algorithms may be used to extract the maximum connected domain pixels in the image that meet certain conditions (eg, the pixel color is uniform and located near the center of the source image).
  • alternatively, background subtraction may be performed using the frames before and after the source image in the acquired frame sequence to extract the foreground, and the foreground pixels are used as the animated object.
  • step S31-22 and step S31-23 can change the order relationship, that is, the method can first extract the animation object and then automatically generate the structure template; and can also automatically generate the structure template and then extract the animation object, which Both methods are within the scope of protection of this embodiment.
  • FIG. 6 is a flow diagram 600 of a third embodiment of acquiring an animated object (ie, step S31 in FIG. 3).
  • the structural template is a manually generated structural template.
  • step S31-31 a source image including an image of the target subject is photographed in response to the photographing request.
  • step S31-32 the positions of key points (such as a handle node, or a bone root node and child nodes) are input on the image of the target subject by means of human-computer interaction (for example, tapping on the touch screen), and the key points are connected to form a manually generated structural template.
  • the method of extracting the object by the device may be fully automatic or semi-automatic.
  • the semi-automatic method is based on a fully automatic basis and is extracted by the user's guidance device.
  • for example, the user may outline part of the object with a red watercolor pen; the device can then use pattern recognition to classify the contour colors, filter the automatically extracted pixel groups down to the areas with a red outline, and take only the pixels in those areas as the object.
  • the above three cases may be combined to different degrees.
  • the following is an example of an implementation in which a preset structure template is combined with an automatically generated structure template.
  • the topology of the predefined structural templates such as quadruped, erect humans, etc.
  • the length of the torso and limb of the animated object is achieved by automatic generation. Specifically, after the topological skeleton of the object is extracted, the length of the skeleton line of the corresponding part is calculated, so that the predefined skeleton is better adapted to the object.
  • steps S31-33 the connected pixel group is extracted from the source image to generate an animation object.
  • step S31-32 and step S31-33 can be performed in either order; that is, the method can first extract the animated object and then form the manually generated structure template, or first form the manually generated structure template and then extract the animated object. Both orders are within the scope of protection of this embodiment.
  • step S32 the animated object is bound to the corresponding structural template. This step is described in detail below by two embodiments.
  • FIG. 7 is a flow diagram 700 of the first embodiment of FIG. 3 for binding an animated object to a corresponding structural template (ie, step S32 in FIG. 3).
  • the structure template is composed of handle nodes.
  • step S32-11 the animated object is meshed.
  • the animated object is meshed, wherein the mesh shape used may be a triangle, a quadrangle or other irregular patterns.
  • any common meshing algorithm can be used to implement the meshing process, which is not limited in this respect.
  • step S32-12 a grid point close to the handle node in the structure template is selected in the grid, and the grid point is used as a constraint point of the grid deformation, and the animation object is bound to the corresponding structure template.
  • handle nodes are set in the structure template, and each handle node controls one component in the structure template.
  • the function of the handle node is similar to the joint of an animal.
  • the handle node of the knee joint in the human structural template can control the movement of the leg in the skeleton.
  • Selecting the grid points in the grid that are close to the handle nodes in the structure template can be implemented by calculating the Euclidean distance.
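This nearest-grid-point selection can be sketched directly. The example below is an illustrative Python snippet, not the patent's code; the handle names ("knee", "elbow") and coordinates are hypothetical, and each handle is bound to the closest grid point by Euclidean distance.

```python
import math

def bind_handles_to_grid(handles, grid_points):
    """For each handle node, pick the closest grid point (Euclidean
    distance); the chosen points become the deformation constraint points."""
    bindings = {}
    for name, (hx, hy) in handles.items():
        bindings[name] = min(
            grid_points,
            key=lambda p: math.hypot(p[0] - hx, p[1] - hy),
        )
    return bindings

# Hypothetical handle positions for a knee and an elbow.
handles = {"knee": (1.2, 0.9), "elbow": (3.8, 4.1)}
grid_points = [(x, y) for x in range(5) for y in range(5)]
print(bind_handles_to_grid(handles, grid_points))
# {'knee': (1, 1), 'elbow': (4, 4)}
```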
  • the animated object can be aligned with the structural template before the animated object is bound to the structural template.
  • if the structural template is automatically generated by the device or manually created by the user, the key points in the template were generated from the current animated object, so their positions are already accurately aligned and no alignment is required. If the structural template is fully predefined, it needs to be moved to the preset position on the animated object.
  • the processing can be divided into the following two cases:
  • In the first case, the source image is taken by the user under the guidance of a preset object area. Since the predefined structure template in the database is already aligned with the preset object area, and the animated object is consistent with that area, this is equivalent to a manual alignment of the structural template and the animated object, and the structural template needs no further alignment.
  • In the second case, when the source image is not acquired under the guidance of an object area, the device needs to separately calculate the coordinates, size and contour axis inclination of the animated object area and of the template's object area, derive the required displacement, scale and rotation angle from the results, and then move the object area to the position matching the animated object by panning, zooming and rotating, so that the structural template is aligned with the object.
  • FIG. 8 is a flow diagram of a second embodiment of binding an animated object to a corresponding structural template (i.e., step S32 in FIG. 3).
  • the structural template is composed of a skeleton tree.
  • step S32-21 the animated object is meshed.
  • step S32-22 the animation object is bound to the corresponding structure template by using the method of skinning the skeleton tree.
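Skinning against a skeleton can be illustrated with linear blend skinning, one common skinning technique (the patent does not name a specific algorithm, so this is an assumed concrete choice). Each mesh vertex is a weighted blend of the vertex transformed by each influencing bone; the 2D rotations and weights below are hypothetical.

```python
import math

def rotate(point, origin, angle):
    """Rotate a 2D point around an origin by angle (radians)."""
    px, py = point[0] - origin[0], point[1] - origin[1]
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    return (origin[0] + px * cos_a - py * sin_a,
            origin[1] + px * sin_a + py * cos_a)

def skin_vertex(vertex, bone_transforms, weights):
    """Linear blend skinning: the deformed vertex is the weighted sum
    of the vertex transformed by each influencing bone."""
    x = y = 0.0
    for (origin, angle), w in zip(bone_transforms, weights):
        tx, ty = rotate(vertex, origin, angle)
        x += w * tx
        y += w * ty
    return (x, y)

# A vertex influenced equally by a bone rotated 90 degrees and a static bone.
bones = [((0.0, 0.0), math.pi / 2), ((0.0, 0.0), 0.0)]
deformed = skin_vertex((1.0, 0.0), bones, [0.5, 0.5])
print(deformed)  # ≈ (0.5, 0.5), halfway between (0, 1) and (1, 0)
```

In practice the weights come from the binding step (for example, from distance to each bone), and every mesh vertex is skinned this way each frame.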
  • in addition, the animated object may be bound to the structural template by other binding methods, which can be chosen according to the type of the structural template.
  • step S33 the structure template is actuated to drive the animated object bound to the structure template to perform a corresponding action.
  • the structure template can perform corresponding actions based on the preset actions.
  • the structure template can be correspondingly acted upon based on preset motion rules.
  • the handle node in the structure template or the node in the skeleton tree can be dragged through the human-computer interaction input mode to perform corresponding actions. The above will be explained one by one below.
  • the preset action may be one, a series, one set or multiple sets of designed actions.
  • One, a series, one or more sets of actions can be stored for each preset structural template.
  • the selection of the animation action is done manually by the user or by random assignment of the device.
  • the action can show the motion and deformation of the structural template. Specifically, it can be represented by a displacement of a handle node in a structure template in a key frame, or a movement of a bone in a key frame.
  • the preset action may be change data of the handle nodes or the skeleton tree in each key frame, pre-recorded by the developer. The animation data may be the motion vector of each handle node in each key frame, or the displacement and rotation of each bone in each key frame.
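Key-frame animation data of this kind can be played back by interpolating a handle node's position between key frames. The Python sketch below is illustrative (the "knee" track and its timings are hypothetical) and uses simple linear interpolation, which is one common way to generate transition frames.

```python
def interpolate_handle(keyframes, t):
    """Linearly interpolate a handle node's 2D position at time t from a
    sorted list of (time, (x, y)) key frames, as a transition-frame
    generator would."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)  # normalized position in the interval
            return (p0[0] + a * (p1[0] - p0[0]),
                    p0[1] + a * (p1[1] - p0[1]))
    return keyframes[-1][1]

# Hypothetical key frames for a knee handle: down at t=0, up at t=1.
knee_track = [(0.0, (2.0, 0.0)), (1.0, (2.0, 4.0))]
print(interpolate_handle(knee_track, 0.5))  # (2.0, 2.0)
```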
  • the developer can draw a standard image of the animated object, bind the structural template on the standard image, drag the key points in the structural template, and drive the standard character motion to preview the animation.
  • the preset action may include an action for expressing an emotion type of an object to be animated, or an action for expressing a motion type of the object to be animated.
  • performing the corresponding action by actuating the skeleton model can be implemented as follows:
  • a mesh or triangle is created based on the shape of the object, and the mesh or triangle is subject to handle nodes in the structure template, resulting in movement and deformation.
  • the object is attached to the grid or triangle as a texture, and the object is subject to the handle node in the structure template to form an action.
  • the determination method of the vertex coordinates of the grid or the triangle piece can be realized by dividing the space into a plurality of regions, setting the coordinates of a certain point in advance, and determining other coordinates by means of functions, calculus, matrix, and the like.
  • the location can be adjusted several times to achieve the best results.
  • Developers can design several limb actions for each object.
  • the device directly applies one or a combination of a plurality of appropriate limb movements according to the emotion type of the object, so that the skeleton structure drives the object to move and form an animation.
  • Developers can also implement personalized actions through various human-computer interaction methods without using predefined actions.
  • the action may be composed of a sequence of key frames storing the position of the handle node, and the position of the handle node pre-stored in the computing device and the displacement in each key frame may be freely designed by the developer.
  • Animation data produced by developers can be previewed and modified by binding to standard images.
  • the displayed image may not be a standard image, but an image drawn by the user, and may of course be an image obtained from other channels, such as an image downloaded from a network.
  • the device maps the motion of the handle nodes onto the mesh or triangle nodes of the object, so that the structure template can drive the animated object to move according to the preset action.
  • Structure templates should be designed with as few handle nodes as possible, and the initial spacing between handle nodes should be as large as possible, so that the motion of the object has greater freedom and conflicts between limb components are avoided.
  • the way to automate animations can be self-moving by handle nodes or bones according to certain rules.
  • gravity motion is a kind of motion rule.
  • the corresponding automatic animation method can be: the handle node or the bone has a certain quality. Under the simulated real gravity field, the handle node or the bone hangs down and falls to form an action.
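The gravity-driven motion rule described above can be sketched with simple explicit Euler integration. This is an illustrative Python example with assumed values (time step, starting height, a ground plane at zero), not the physics engine a real device would use.

```python
GRAVITY = 9.8  # m/s^2, simulated real gravity field

def simulate_drop(y0, steps, dt=0.1):
    """Explicit Euler integration of a handle node with mass falling
    under gravity; returns the node's height after each step."""
    y, v = y0, 0.0
    heights = []
    for _ in range(steps):
        v += GRAVITY * dt         # gravity accelerates the node downward
        y = max(0.0, y - v * dt)  # clamp at the ground plane
        heights.append(y)
    return heights

# A node starting 2 m up sags toward the ground frame by frame.
print(simulate_drop(2.0, 5))
```

Feeding each successive height back into the bound mesh produces the "hang down and fall" action; a physics engine adds constraints so connected bones drag each other along.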
  • the implementation method of the automatic rule-driven node motion can be implemented by a physical engine device that is common in the market.
  • the user can use the input mode of human-computer interaction, for example, dragging the handle node or the bone node with a mouse or a touch to generate an action desired by the user.
  • the user can combine the way the animation is automatically created with the way the animation is made manually.
  • the nodes of the structural template will always be subjected to the gravity field, and the user can exert an external force on the node in an interactive manner.
  • the device can make an animation according to the effect produced by the simulation automatic and manual force superposition.
  • the device applies the animation data to the structural template to drive the animation object to move.
  • depending on the structure template, the implementation by which the structure template drives the animated object also differs.
  • the first implementation may be: if the structure template is implemented with handle nodes, the motion vector of a handle node is passed directly to the constraint points bound to that node, displacing those constraint points.
  • the device recalculates the position of each vertex according to the change of the position of the constraint point, thereby generating a deformed mesh.
  • the algorithm for calculating the position of the constrained vertex can be implemented by any reasonable method, and there is no limitation in this respect.
  • the device then maps the pixel group of the animated object as a texture to the changed mesh, and completes the motion change of the animated object in this frame.
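A minimal sketch of this first implementation, assuming each constraint vertex simply copies the motion vector of the handle node it is bound to. A real system would also re-solve the positions of the remaining, unconstrained vertices (the text leaves the algorithm open; as-rigid-as-possible deformation is one common choice), and all names here are illustrative.

```python
def deform_mesh(vertices, bindings, handle_motion):
    """Move each constraint vertex by the motion vector of the handle node it is
    bound to; unbound vertices stay put in this simplified sketch."""
    out = []
    for i, (x, y) in enumerate(vertices):
        dx, dy = handle_motion.get(bindings.get(i), (0.0, 0.0))
        out.append((x + dx, y + dy))
    return out

verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
bindings = {2: "head"}              # vertex 2 is the constraint point for the "head" handle
moved = deform_mesh(verts, bindings, {"head": (0.0, 0.5)})
```

The pixel group of the animated object would then be texture-mapped onto the deformed mesh, as described above.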
  • the second implementation may be: if the structure template is implemented with a skeleton tree, the movement of a bone joint moves the whole skeleton, and the bones drive the skin mesh bound to them to be displaced and deformed.
  • the specific implementation method can be any common general algorithm, and there is no limitation in this aspect.
  • the device can loop the animation automatically until the user selects another set of animation actions, or captures another animated object, after which the animation loops automatically again.
  • the preset structure template drives the object to be animated to perform corresponding actions.
  • the child selects the object type by touch on the tablet - a puppy - and the picture of the puppy is bound to the puppy's structure template;
  • the child then selects the emotion type by touch on the tablet - frustration - and a frustrated animation of the puppy appears on the tablet screen: the puppy lowers its head, droops its tail, and lies down on the spot.
  • the child selects the object outline by touch on the tablet - a circle - and the picture of the small ball is bound to the circular structure template;
  • the child then selects the action type by touch on the tablet - bouncing - and an animation of the bouncing ball appears on the tablet screen: the ball jumps into the air, falls to the ground, bounces up, and jumps into the air again.
  • the user may be other users than children, for example, young people, middle-aged people, and elderly people.
  • because making animations this way is simple, convenient, and fun, the method is especially attractive to children who like to explore and create.
  • These children don't need to master the animation principles that a professional animator must master. They can easily create their favorite animations with a simple selection.
  • different types of structure templates may be stored in the preset animation material database by the type of the animation object.
  • object types can include people, animals, plants, and items; specific object types can be old people, children, young men, young women, cats, dogs, fish, flowers, birds, and grass. In addition, object types can be customized according to the needs of animation production, and no limitation is imposed in this respect.
  • the preset animation material database may store the structure template of the corresponding contour shape according to different contour shapes of the animation object.
  • the object contour shape may be a circle, a square, a rectangle, a star, a ring, or the like. It can be understood that the contour shape of the object can be not only a planar shape but also a three-dimensional shape, and the shape of the specific contour can be personalized according to the needs of the animation, and the content is not limited in this respect.
  • the emotion types may also include one or more of: happiness, disappointment, calm, anger, sadness, grief, sorrow, indignation, boredom, fear, horror, respect, affection, loathing, greed, jealousy, arrogance, shame, and humiliation.
  • the emotion type of an object can be expressed by the object's actions, and selecting an emotion type calls up the corresponding action data. For example, if the animated object is a dog and the user chooses happy, the dog will wag its tail and spin in place.
  • the object type and the emotion type can also be automatically selected by the device according to the characteristics of the object, such as the height of the object, the number of legs, or the brightness and color of the body of the object.
  • the automatic selection here is particularly suitable for simple situations, for example, simply dividing the object into two types: an upright walking animal and a crawling quadruped.
  • the number of legs is judged simply from the number of forks under the overall shape, the height of the body is used to help determine whether it is upright or crawling, and an upright-walking or crawling action is then simply assigned.
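The simple classification described above might be sketched as follows. The fork-count and aspect-ratio thresholds are invented for illustration; the patent does not fix concrete values.

```python
def classify_template(fork_count, height, width):
    """Crude template selection from shape features: tall shapes with two lower
    forks map to an upright biped; wide shapes with four forks map to a
    crawling quadruped; anything else is left unclassified."""
    if fork_count >= 4 and width > height:
        return "quadruped"
    if fork_count == 2 and height >= width:
        return "biped"
    return "unknown"
```

The device would then load the matching preset structure template for the detected class.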
  • the user does not need to master professional animation principles and can easily create animations they like, with professional effects, through simple operations; the approach is simple, convenient, lively, and widely applicable.
  • the embodiment of the present invention photographs a self-created picture and extracts the animated object in the image for making an animation, which is simple, convenient, and engaging.
  • the combination of painting and animation is particularly suitable for children.
  • FIG. 9 is a functional structural diagram 900 of an apparatus for making an animation according to an embodiment of the present invention.
  • the apparatus for making an animation may include: an animated object acquisition unit, an animated object binding unit, and an animation production unit, wherein:
  • the animated object acquisition unit can be used to acquire an animated object.
  • An animated object binding unit can be used to bind an animated object to a corresponding structural template.
  • the animation production unit can make the structure template move, so as to drive the animated object bound to the structure template to perform corresponding actions.
  • FIG. 10 is a schematic diagram 1000 of a functional structure of a first embodiment of an animation object acquisition unit according to the present invention.
  • the animated object acquisition unit may include: a first structure template activation module, a first image capturing module, and a first animated object extraction module, wherein:
  • the first structural template activation module can be configured to activate the preset structural template in response to the photographing request.
  • the first image capturing module may be configured to capture a source image including an image of the target subject after the image of the target subject is substantially matched with the preset structure template.
  • the first animated object extraction module may be configured to extract a connected pixel group in the preset structure template region from the source image to generate an animated object.
  • FIG. 11 is a schematic diagram 1100 of a functional structure of a second embodiment of an animation object acquisition unit according to the present invention.
  • the animated object acquisition unit may include: a second image capturing module, a second structure template generating module, and a second animated object extraction module, wherein:
  • the second image capturing module may be used to capture a source image including the target subject.
  • the second structure template generating module can be used to extract the structure of the target subject from the source image, and to simplify the lines in the structure to form an automatically generated structure template.
  • the second animated object extraction module may be configured to extract connected pixel groups from the source image to generate an animated object.
  • FIG. 12 is a schematic diagram 1200 of a functional structure of a third embodiment of an animation object acquisition unit according to the present invention.
  • the animated object acquisition unit may include: a third image capturing module, a third structure template generating module, and a third animated object extraction module, wherein:
  • the third image capturing module may be configured to capture, in response to a photographing request, a source image including the image of the target subject.
  • the third structure template generating module may be configured to input the positions of key points in the image of the target subject by means of human-computer interaction, and to connect the key points to form a manually generated structure template.
  • the third animated object extraction module may be configured to extract connected pixel groups from the source image to generate an animated object.
  • the first, second, and third image capturing modules, as well as other similar modules, may be implemented with the same hardware or with different hardware according to actual needs; no limitation is imposed in this respect.
  • FIG. 13 is a schematic diagram 1300 of a functional structure of a first embodiment of a structural template binding unit according to the present invention.
  • the structure template is composed of handle nodes.
  • the structure template binding unit may include: a first meshing processing module and a first animated object binding module, wherein:
  • the first meshing processing module can be used to mesh the animated object.
  • the first animated object binding module can be used to select grid points in the mesh that are close to the handle nodes in the structure template and use those grid points as constraint points of the mesh deformation, so as to bind the animated object to the corresponding structure template.
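A sketch of this nearest-grid-point binding by Euclidean distance, including the failure case described in the source text where a handle node has no grid point within a certain distance. The distance threshold and the node names are assumptions.

```python
import math

def bind_handles(grid_points, handles, max_dist=0.5):
    """For each handle node, pick the nearest grid point by Euclidean distance;
    the binding for a handle is None when no grid point lies within max_dist."""
    result = {}
    for name, (hx, hy) in handles.items():
        best, best_d = None, max_dist
        for i, (gx, gy) in enumerate(grid_points):
            d = math.hypot(gx - hx, gy - hy)
            if d <= best_d:
                best, best_d = i, d
        result[name] = best
    return result

grid = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
binding = bind_handles(grid, {"knee": (0.1, 0.1), "far": (5.0, 5.0)})
```

A `None` entry would trigger re-selection of grid points, as the source text describes for a failed binding.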
  • FIG. 14 is a schematic diagram 1400 of a functional structure of a second embodiment of a structural template binding unit according to the present invention.
  • the structure template is composed of a skeleton tree.
  • the structure template binding unit may include: a second meshing processing module and a second animated object binding module, wherein:
  • the second meshing processing module can be used to mesh the animated objects.
  • the second animated object binding module can be used to bind the animated object to the corresponding structural template by using a method of skinning the skeleton tree.
  • the animation production unit may act in one or more of the following ways: making the structure template move based on a preset action; making the structure template move based on preset motion rules; or dragging a handle node in the structure template or a node in the skeleton tree through human-computer interaction input to make the structure template move.
  • the animated object may include: a target object within one or more scenes, a target image drawn on the drawing plane, and a target item placed on the drawing plane.
  • the drawing plane includes: a drawing card with a preset background, or a drawing card with a solid-color background.
  • the apparatus for making an animation may further include: an animation recording unit, an animation file generating unit, and an animation file display/storage unit, wherein:
  • the animation recording unit can be used to record, frame by frame, the animated object performing actions.
  • the animation file generating unit may be configured to generate an animation file from the recorded frames, or to generate an animation file from the recorded frames and configure a background and/or audio file for it.
  • the animation file display/storage unit can be used to display and/or store animation files.
  • the apparatus for making an animation in each of the above embodiments may be the execution body of the method for making an animation, and each functional module in the apparatus implements the corresponding flow of that method.
  • the related functional modules can be implemented by a hardware processor.
  • each functional module only needs to implement its own function; the specific connections between modules are not limited. Since the apparatus of the above embodiments corresponds to the method for making an animation, those skilled in the art will appreciate that, for convenience and brevity of description, the specific workflow of each functional unit described above may refer to the corresponding process in the foregoing method embodiments and is not repeated here.
  • Figure 15 is a schematic view of a frame structure 1500 of a first embodiment of an apparatus for making animations according to the present invention.
  • the apparatus for making an animation may include: a memory, a processor, and a display, wherein:
  • the memory can be used to store material data and programs.
  • the processor can be used to execute a program stored in the memory, the program causing the processor to perform the following operations: acquiring an animated object; binding the animated object to the corresponding structure template; and making the structure template move, so as to drive the animated object bound to the structure template to perform corresponding actions.
  • the display can be used to display the animated object performing the corresponding series of actions.
  • Figure 16 is a schematic view of a frame structure 1600 of a second embodiment of an apparatus for making animations according to the present invention.
  • the apparatus may include a central processing unit (CPU) that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage portion into a random access memory (RAM). The RAM also stores the various programs and data required for the operation of the apparatus.
  • the CPU, ROM, and RAM are connected to each other through a communication bus.
  • An input/output (I/O) interface is also connected to the bus.
  • the following components are connected to the I/O interface: an input portion including a keyboard, a mouse, and the like; an output portion including a cathode ray tube (CRT) or liquid crystal display (LCD) and a speaker; a storage portion including a hard disk or the like; and a communication portion including a network interface card such as a LAN card or a modem.
  • the communication section performs communication processing via a network such as the Internet.
  • the drive is also connected to the I/O interface as needed.
  • a removable medium such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive as needed so that a computer program read therefrom is installed into the storage portion as needed.
  • the functional blocks shown in the block diagrams described above may be implemented as hardware, software, firmware, or a combination thereof.
  • when implemented in hardware, they can be, for example, electronic circuits, application-specific integrated circuits (ASICs), suitable firmware, plug-ins, function cards, and the like.
  • when implemented in software, the elements of the present invention are the programs or code segments used to perform the required tasks.
  • the programs or code segments can be stored in a machine-readable medium, or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave.
  • a "machine-readable medium" can include any medium that can store or transfer information.
  • examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, radio-frequency (RF) links, and the like.
  • the code segments can be downloaded via a computer network such as the Internet, an intranet, and the like.
  • each functional unit or module in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses a method and an apparatus for making animations. The method includes: acquiring an animated object; binding the animated object to a corresponding structure template; and making the structure template move, so as to drive the animated object bound to the structure template to perform corresponding actions. With the embodiments of the present invention, users can easily create animations they like, with professional effects, through simple operations and without mastering professional animation principles; the approach is simple, convenient, lively, and widely applicable.

Description

Method and Apparatus for Making Animations
Technical Field
The present invention belongs to the field of animation technology and in particular relates to a method and an apparatus for making animations.
Background Art
With the rapid development of entertainment culture, people's demand for animation keeps growing. Professional animators usually produce animation in the following two ways:
First way: frame-by-frame animation, also known as stop-motion animation. FIG. 1 shows the flow of producing frame-by-frame animation, in which S11: a professional animator produces every frame required by the animation; S12: a camera photographs each produced picture to generate corresponding images; S13: the images are concatenated to generate the animation. Producing animation in this way requires a professional animator to create every single frame of the animation; the workload is enormous and repetitive, and the production is tedious and time-consuming.
Second way: keyframe animation. FIG. 2 shows the flow of producing keyframe animation, in which S21: a professional animator produces the keyframe images required by the animation; S22: a computer generates the transition-frame images between keyframes; S23: the keyframe images and transition-frame images are concatenated to generate the animation. This way only requires a professional animator to produce the keyframe images, so the drawing workload is much smaller than in the first way. However, the animator must deeply understand the motion rules between the keyframe images before a computer can be used to generate the transition frames between them; this way is therefore highly specialized and unsuitable for ordinary users.
Therefore, existing animation production methods involve professional artistic skills and professional computer science and technology; ordinary users can only passively accept animations made by professional animators and cannot create animations according to their own wishes.
Summary of the Invention
In view of one or more of the problems described above, embodiments of the present invention provide a method and an apparatus for making animations.
In one aspect, an embodiment of the present invention provides a method for making an animation, the method including:
acquiring an animated object;
binding the animated object to a corresponding structure template;
making the structure template move, so as to drive the animated object bound to the structure template to perform corresponding actions.
In another aspect, an embodiment of the present invention provides an apparatus for making an animation, the apparatus including:
an animated object acquisition unit for acquiring an animated object;
an animated object binding unit for binding the animated object to a corresponding structure template;
an animation production unit for making the structure template move, so as to drive the animated object bound to the structure template to perform corresponding actions.
In yet another aspect, an embodiment of the present invention provides an apparatus for making an animation, the apparatus including:
a memory for storing material data and programs;
a processor for executing the programs stored in the memory, the programs causing the processor to perform the following operations:
acquiring an animated object;
binding the animated object to a corresponding structure template;
making the structure template move, so as to drive the animated object bound to the structure template to perform corresponding actions; and
a display for displaying the animated object performing the corresponding actions.
With the embodiments of the present invention, users can easily create animations they like, with professional effects, through simple operations and without mastering professional animation principles; the approach is simple, convenient, lively, and widely applicable.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a first prior-art method of making an animation;
FIG. 2 is a schematic flowchart of a second prior-art method of making an animation;
FIG. 3 is a schematic flowchart of a method of making an animation according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a first embodiment of acquiring an animated object in FIG. 3;
FIG. 5 is a schematic flowchart of a second embodiment of acquiring an animated object in FIG. 3;
FIG. 6 is a schematic flowchart of a third embodiment of acquiring an animated object in FIG. 3;
FIG. 7 is a schematic flowchart of a first embodiment of binding an animated object to a corresponding structure template in FIG. 3;
FIG. 8 is a schematic flowchart of a second embodiment of binding an animated object to a corresponding structure template in FIG. 3;
FIG. 9 is a schematic functional structural diagram of an apparatus for making an animation according to an embodiment of the present invention;
FIG. 10 is a schematic functional structural diagram of a first embodiment of an animated object acquisition unit according to the present invention;
FIG. 11 is a schematic functional structural diagram of a second embodiment of an animated object acquisition unit according to the present invention;
FIG. 12 is a schematic functional structural diagram of a third embodiment of an animated object acquisition unit according to the present invention;
FIG. 13 is a schematic functional structural diagram of a first embodiment of a structure template binding unit according to the present invention;
FIG. 14 is a schematic functional structural diagram of a second embodiment of a structure template binding unit according to the present invention;
FIG. 15 is a schematic frame structural diagram of a first embodiment of an apparatus for making an animation according to the present invention;
FIG. 16 is a schematic frame structural diagram of a second embodiment of an apparatus for making an animation according to the present invention.
具体实施方式
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整的描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
下面将详细描述本发明的各个方面的特征和示例性实施例。在下面的详细描述中,提出了许多具体细节,以便提供对本发明的全面理解。但是,对于本领域技术人员来说很明显的是,本发明可以在不需要这些具体细节中的一些细节的情况下实施。下面对实施例的描述仅仅是为了通过示出本发明的示例来提供对本发明的更好的理解。本发明决不限于下面所提出的任何具体配置和算法,而是在不脱离本发明的精神的前提下覆盖了元素、 部件和算法的任何修改、替换和改进。在附图和下面的描述中,没有示出公知的结构和技术,以便避免对本发明造成不必要的模糊。
现在将参考附图更全面地描述示例实施方式。然而,示例实施方式能够以多种形式实施,且不应被理解为限于在此阐述的实施方式;相反,提供这些实施方式使得本发明更全面和完整,并将示例实施方式的构思全面地传达给本领域的技术人员。此外,所描述的特征、结构或特性可以以任何合适的方式结合在一个或更多实施例中。在下面的描述中,提供许多具体细节从而给出对本发明的实施例的充分理解。然而,在一些情况下,本发明实施例没有详细示出或描述公知结构、材料或者操作以避免模糊本发明的主要技术创意。
需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互组合。下面将参考附图并结合实施例来详细说明本申请。
图3是根据本发明一实施例的制作动画的方法的流程示意图300。
在步骤S31中:获取动画对象。
其中,动画对象可以是一个或者多个场景内的对象,例如,动画对象可以是马路上的行人,鱼缸里的小鱼,天空的云彩等。动画对象还可以是绘制在绘图平面上的图像或者是摆放在绘图平面上的物品。该绘图平面可以是具有预设背景的绘图卡片,或者是以纯色为背景的绘图卡片。例如,绘图平面可以是纸、画布、桌面等物理平面。在实际运用中,用户可以用笔绘制,用材质剪贴,用材料塑造或用实物拼摆等方式创作出动画对象的外形。这里的动画对象包括但不限于人物、动物、各种来源于自然界的生物、人为构思出的生命体,以及不具有生命但可以人为赋予动作的物体或形象,此方面内容不作限制。
由此,本发明实施例可以通过在预设的卡片上绘画,或者在预设的卡片上直接摆放物品形成图画,不仅简单方便,而且有绘图平面做背景,画面会显得生动、活泼,增强了等用户(特别是儿童)的趣味性。
另外,本发明实施例可以通过在预设背景的绘图卡片上绘画,增强了制作动画的专业性。此外,本发明实施例还可以在纯色为背景的绘图卡片上绘制,减少了对动画材料的制作需求,降低了制作动画的成本。
在本实施例中,利用摄像头摄取图像,可以是用户手持带有摄像头的设备主动拍摄,或用固定的便携设备进行自动拍摄,此方面不做限制。
在本实施例中,制作动画的方法还包括:逐帧录制动画对象做系列动作的画面;根据录制的画面生成动画文件,或者根据录制的画面生成动画文件并为动画文件配置背景和/或音频文件;显示和/或存储动画文件。
由此,本实施例可以通过逐帧录制动画对象生成动画文件,并即时显示和/或存储动画文件,可以方便用户预览或者重复观看生成的动画文件。
另外,本实施例可以通过为运动画面配置背景和/或音频文件,使得简单的动画制作就可以获取专业的效果,增强了用户的满意度。
其中,配置的背景可以为预设卡片上的背景。由此,本实施例可以通过将预设卡片上的背景作为动画背景,使得动画与背景协调一致,进一步增加了动画的专业效果。
在本实施例中,所获取的动画对象会绑定至与其对应的结构模板上。具体的,结构模板可以是预设的结构模板、自动生成的结构模板和手工生成的结构模板,也可以将这三种模板进行组合使用。因为根据不同的结构模板的类型,获取动画对象的实现方式可以不同,所以下面将通过图4、图5和图6的实施例来分别进行说明。
图4是获取动画对象(即图3中步骤S31)第一实施例的流程示意图400。在本实施例中,结构模板是预设的结构模板。
在步骤S31-11中:响应于拍摄请求,启动预设的结构模板。
在本实施例中,根据动画对象的不同类型,预设的结构模板可以展现动画对象的框架结构、轮廓结构或者骨骼结构。动画对象类型可以包括人物、动物、植物及物品等类型。具体的,动画对象类型可以是老人、小孩、男青年、女青年、猫、狗、鱼、花、鸟和草等类型。具体动画对象类型可以根据动画制作的需要进行个性化设置,此方面内容不做限制。
在本实施方式中,可以预先设置动画素材数据库,该数据库可以按动画对象的类型存储有不同类型的结构模板。
在本实施方式中,当预设的结构模板均与动画对象不匹配时,可以基于待制作动画对象绘制与该动画对象相匹配的结构模板,将与该动画对象 相匹配的结构模板对应的存储至动画素材数据库内。
由此,本发明实施方式通过预设动画素材库,不断向该素材库添加素材,使得随着时间的推移,该素材库里面的素材会变得越来越丰富,用户可以用于选择的素材也会越来越多,动画效果也会越来越好,专业性越来越强。
在S31-12中:在将目标拍摄物的图像与预设的结构模板基本匹配之后,拍摄包括目标拍摄物的图像的源图像。
在本实施方式中,拍摄请求可以由用户向装置发出的拍摄请求。具体的,用户可以用人机交互的方式选择拍摄图标来实现。人机交互的方式可以包括但不限于:触摸屏点选、鼠标单击、键盘键入、在触摸屏上点选,或是使用文字、符号、颜色等表示方法在绘图平面上或空间中表示,再经摄像头录入等方式。
在本实施例中,用户可以手持装置,在预设的结构模板的引导下,将动画对象与预设的结构模板基本贴合之后,拍摄包括目标拍摄物的图像的源图像。摄像头可以采集源图像的前、后帧多幅图,这些源图像可以用于后续步骤中的图像提取工作。
在本实施例中,预设的结构模板可以是从现有的数据库中获取的,也可以由软件系统的开发者(也可以是用户)在制作动画之前自行设计。具体设计结构模版的实现方式可以是:
通过一组用来描述对象结构的数据来构成结构模板。具体的,开发者可以先构思出结构模版的大致属性和对应动画对象的外貌,从中抽离出能带动结构模板运动的句柄节点或骨骼树。开发者在设计结构模版的同时,还可以设计出动画对象的大致轮廓,以该轮廓作为对象的预设区域,以方便使用者在拍摄时将目标拍摄物的图像与结构模板对齐,以方便后续的动画对象提取操作。在结构模板所存放的数据库中,句柄节点通常是以二维坐标表示,骨骼树则是一个由骨骼数据(包括骨骼长度、父骨骼信息和限制信息等)组成的树状结构数据。预设区域则可以用轮廓向量表示,例如,要定义狗的结构模版,首先,可以设定六个关键点,分别对应四肢和头、尾,还可以设定根节点为腰,腰节点包含脊椎和尾椎两根骨骼,脊椎包含 两个前肢骨骼和颈部骨骼,尾椎包含两个后肢和尾部骨骼,由这些骨骼,组成骨骼树。接着,再勾勒出一个用四条腿站立的狗的大致轮廓,作为预设区域。
在本实施例中,预设的结构模板可以由开发者先行设计并存储在制作动画的装置(下面简称装置)的数据库中。例如,预设的结构模板可以存储在平板电脑或者手机的APP应用中,其可由用户手动或由装置自动选择。用户手动选择的方式包括各种使用人机交互的方式向装置输入,再由装置获取数据库中存储的结构模版。而装置自动选择的方式则要开发者在数据库中存储与结构模版相应的启用条件。装置可以用图像分析的手段,提取动画对象的形状特征,例如突出的部分的位置、数量和轮廓整体的面积、高度来对动画对象的轮廓进行分类,从而选择采用哪种模版。例如,装置可以根据动画对象的整体轮廓形状下方的突出部分的数量判断腿的数量,同时用形体的高度辅助判断其是直立状态或是爬行状态,从而确定此动画对象应该使用直立的人的结构模版或是四足爬行动物的结构模版。
在本实施例中,启动预设的结构模板可以引导用户在拍摄源图像时,在将目标拍摄物的图像与预设的结构模板基本贴合之后再进行拍摄。如此设计,可以方便后期将动画对象直接绑定至结构模板上,从而减少将动画对象与结构模板对齐的步骤,减少了数据运算处理量,提高了将动画对象绑定至结构模板的成功率,进而可以提高制作动画的效果和质量。
在本实施例中,具体的拍摄设备或者装置可以是任意包含或可连接摄像头的计算设备,包括但不限于计算机、手机、平板电脑、各种手持计算设备和固定式体验装置等。装置或者装置还可以带有显示设备,如显示屏、投影仪等,用于预览和最终呈现动画作品。
在步骤S31-13中:由源图像中提取处于预设的结构模板区域内的连通的像素群,生成动画对象。
在本实施方式中,因为动画对象是由一个或多个连通区域内包含的所有像素所组成,所以可以通过图像分析的方法提取源图像中的动画对象。具体的,可以对预设区域内(例如,预设的结构模板所处的区域内)的像素进行轮廓查找和形态学处理,提取出连通的像素群作为动画对象。如果 预设区域内没有符合条件的像素群,则动画对象提取失败。此时,可以重新拍摄源图像,重新从源图像中提取动画对象。
图5是获取动画对象(即图3中步骤S31)的第二实施例的流程示意图500。在本实施例中,结构模板是自动生成的结构模板。
在步骤S31-21中:摄取包括目标拍摄物的源图像。
在本实施例中,装置可以固定在桌面或地面,使设备中的摄像头拍摄到绘图平面或场景,装置自动采集并分析每一帧图像。例如,当发现绘图平面中或场景中,某段时间内没有物体运动时,即可以判定用户绘制或摆放的动作完成,装置选取这一刻的图像作为源图像。其中,判断某段时间内没有物体运动的方法可由计算机图像分析中的各种运动检测方法实现,如帧差法、光流法等。
在步骤S31-22中:由源图像中提取目标拍摄物的结构,并对结构中的线条进行简化处理,形成自动生成的结构模板。
在本实施例中,自动生成的结构模版可采用常见的形状分析算法,提取连通区域的骨架,对骨架中长度过短的线条进行剔除,并对骨架线条用近似的方法将其简化成关键点较少的折线。依据结构模版的实现方式不同,可以把简化后的估计线条转换为不同类型的结构模版,该实现方式例如可以是:通过选取骨架线条的交点,将此作为句柄节点,或选取一个中心交点,将此作为根节点。对于其他交点,可建立由根节点生出的树,将其作为骨骼等。
在步骤S31-23中:由源图像中提取连通的像素群,生成动画对象。
在本实施例中,全自动的对象提取有多种方法,具体可以使用各种图像分割算法提取出图像中符合一定条件(如像素颜色均匀并位于源图像中心附近)的最大连通域像素。对于手动捕捉的源图像,可以基于源图像在采集帧序列中的前后帧进行背景减除从而提取前景,把前景像素作为动画对象。
需要说明的是,步骤S31-22和步骤S31-23可以调换顺序关系,即:本方法可以先提取动画对象,再自动生成的结构模板;还可以先自动生成结构模板,再提取动画对象,这两个方式均在本实施例的保护范围之内。
图6是获取动画对象(即图3中步骤S31)的第三实施例的流程示意图600。在本实施例中,结构模板是手工生成的结构模板。
在步骤S31-31中:响应于拍摄请求,拍摄包括目标拍摄物的图像的源图像。
该步骤的实现方式可以参考上面的方式,此方面内容不再赘述。
在步骤S31-32中:通过人机交互的方式(例如在触摸屏上点选)在目标拍摄物的图像中输入关键点(如句柄节点或骨骼根节点、子节点等)的位置,将关键点连通,形成手工生成的结构模板。
在本实施例中,如果结构模版由用户手动生成或装置自动生成,装置提取对象的方法则可以是全自动或是半自动的。而半自动的方法则是在全自动的基础上,由用户指导装置进行提取。例如,可规定用户用红色水彩笔勾勒的轮廓之内部分作为对象,则装置可利用模式识别的方法对轮廓的颜色分类,筛选出自动提取的像素群中带有红色轮廓的区域,仅选取这些区域内的像素作为对象。
在一些可选的实施例中,可以将上述三种情况(预设的结构模板、自动生成的结构模板和手工生成的结构模板)进行不同程度结合。下面举例说明预设的结构模板与自动生成的结构模板相结合的实现方式。
首先,预定义结构模版的拓扑结构,如四足动物、直立的人类等。然后,动画对象的躯干和肢体的长度通过自动生成的方式实现。具体的,在提取了对象的拓扑骨架后,再计算相应部分的骨架线条长度,从而使预定义骨架更好的适应对象。
在步骤S31-33中:由源图像中提取连通的像素群,生成动画对象。
该步骤的实现方式可以参考上面的方式,此方面内容不再赘述。
需要说明的是,步骤S31-32和步骤S31-33可以调换顺序关系,即:本方法可以先提取动画对象,再形成手动生成的结构模板;还可以先形成手动生成的结构模板,再提取动画对象,这两个方式均在本实施例的保护范围之内。
再次参考图3,在步骤S32中:将动画对象绑定至对应的结构模板上。该步骤下面由两个实施例进行详细说明。
图7是图3中将动画对象绑定至对应的结构模板上(即图3中步骤S32)的第一实施例的流程示意图700。在本实施例中,结构模板由句柄节点构成。
在步骤S32-11中:将动画对象进行网格化处理。
在本实施例中,将动画对象进行网格化处理,其中,采用的网格形状可以是三角形、四边形或其他不规则图形。具体可以采用任何常见的网格化算法来实现该网格化处理,此方面不做限制。
在步骤S32-12中:在网格中选取与结构模板中的句柄节点接近的网格点,利用网格点作为网格形变的约束点,将动画对象绑定至对应的结构模板上。
在本实施例中,结构模版由句柄节点组成。通常,结构模板中会设置几个句柄节点,每个句柄节点分别控制结构模板中的一个部件。句柄节点的功能类似于动物的关节,例如,人的结构模板中膝关节部位的句柄节点就可以控制骨架中腿部的运动。在网格中选取与结构模板中的句柄节点接近的网格点,可以通过计算欧拉距离的方法实现。在添加网格点为网格形变的约束点时,如果某一句柄节点在一定距离之内没有网格点,则动画对象与结构模版绑定失败。在此种情况下,可以重新选取网格点,重新将动画对象绑定至对应的结构模板上。而绑定成功的网格点会跟随句柄节点的运动而运动,其具体实现方式可以通过网格点完全复制句柄节点运动矢量的方式来实现。
在一些可选的实施例中,在动画对象与结构模板绑定之前可以将动画对象与结构模板进行对齐。
如果结构模版是由装置自动生成的或用户手动制作的,则结构模版中的关键点本来就是基于当前动画对象生成的,因此它们的位置已经是准确对齐的,无需再进行对齐操作。如果结构模版是完全预定义的,则需要先把结构模版移动到动画对象中的预设位置上,该处理可以分为以下两种情况:
当源图像是用户在预设的对象区域指示下拍摄时,由于数据库中的预定义的结构模版与预设的对象区域已经是对齐的,并且动画对象与预设的 对象区域是吻合的,相当于结构模版和动画对象已完成人工对齐,此种情况下,结构模版无需再作对齐。
当源图像并非是在对象区域指示下获取时,则装置需要分别计算动画对象区域和对象的坐标、尺寸和轮廓主轴倾角,再根据计算结果在坐标系中对齐所需的位移、比例和夹角,把对象区域通过平移、缩放、旋转等手段移动到与动画对象吻合的位置,从而使结构模版与对象对齐。
图8是将动画对象绑定至对应的结构模板上(即图3中步骤S32)的第二实施例的流程示意图。在本实施例中,结构模板由骨骼树构成。
在步骤S32-21中:将动画对象进行网格化处理。
需要说明的是,“将动画对象进行网格化处理”的操作也出现在了其它的步骤中,对于这种相同或者相似的内容,它们可以采取相同的实现方式,也可以采用不同的实现方式,此方面内容不做限制。
在步骤S32-22中:利用对骨骼树进行蒙皮的方法,将动画对象绑定至对应的结构模板上。
在本实施例中,还可以通过其他的绑定方式将动画对象与结构模型进行绑定,具体可以根据结构模版的类型而进行个性化设置。
再参考图3,在步骤S33中:使结构模板动作,以驱动绑定在结构模板上的所述动画对象做相应的动作。
在本实施例中,可以基于预设的动作,结构模板做相应的动作。或者,可以基于预设的运动规则,结构模板做相应的动作。再者,可以通过人机交互输入方式,拖动结构模板中的句柄节点或者骨骼树中的节点做相应的动作。下面将上述情况逐一进行说明。
首先,描述基于预设的动作系列,结构模板做系列动作的实现方式。
在本实施例中,预设的动作可以是一个、一系列、一套或者多套设计好的动作。可以针对每个预设的结构模版,存储一个、一系列、一套或者多套动作。动画动作的选取由用户手动选择或装置随机指派完成。动作可以展示结构模版的运动和形变情况。具体的,可以由结构模板中的句柄节点在关键帧中的位移,或骨骼在关键帧中的运动情况表现。预设动作可以是开发者预先记录句柄节点或骨骼树在每一关键帧中的变化数据。动画数 据以每个句柄节点在每一关键帧中的运动向量,或是骨骼在每一关键帧中的位移旋转量。具体的,开发者在设计动作时,可以绘制一张动画对象的标准图像,在标准图像上绑定结构模版,拖动结构模版中的关键点,带动标准角色运动来预览动画。预设动作可以包括:用于表现待制作动画的对象的情感类型的动作,或者用于表现待制作动画的对象的运动类型的动作。
在一些可选的实施例中,由骨骼模型的动作带动对象做相应的动作的实现方式可以如下所示:
根据对象的形状建立网格或三角片,并使网格或三角片受制于结构模板中的句柄节点,从而产生移动和变形。或者把对象作为纹理贴合于网格或三角片上,对象就受制于结构模板中的句柄节点,形成动作。
其中,网格或三角片的顶点坐标的确定方式可以通过将空间分割为多个区域,预先设置某点的坐标,再用函数、微积分、矩阵等方式确定其它坐标等方式来实现。
为了更专业的动画效果,可以先在对象的内部设置多个句柄节点,将句柄节点布置在运动关键部位(例如人体的关节处),然后根据预览的动画效果,来增加、删除或者移动句柄节点的位置,具体可以经过多次调整以达到最佳效果。
开发者可以对每个对象设计数个肢体动作。装置根据对象的情感类型直接应用一个或组合多个合适的肢体动作,使骨骼结构带动对象运动,形成动画。开发者也可以不使用预定义动作,而通过各种人机交互手段来实现个性化的动作。
具体的,动作可以由存储句柄节点位置的关键帧序列组成,计算装置中预存的句柄节点位置及每一关键帧中的位移可以由开发者自由设计。开发者制作的动画数据可通过绑定在标准图像上来预览和修改。在动画制作过程中,显示的图像可以不是标准图像,而是用户绘制的图像,当然也可以是从其它渠道获取的图像,例如从网络上下载的图像等。当用户拍摄了绘制的图画并将其绑定至结构模板后,装置会把句柄节点的运动映射到对象的网格或三角片的节点中,使得结构模板根据预设的动作就可以驱动动画对象运动。结构模板在设计时应尽量考虑使用更少的句柄节点,并使句 柄节点的初始位置间距尽量大,从而使对象的动作具有更大的自由度,避免肢体部件之间相互冲突。
接着,描述基于预设的运动规则,结构模板做动作的实现方式。
该自动制作动画的方式可以由句柄节点或骨骼按照某种规则自行运动。例如根据重力运动就是一种运动规则,其对应的自动制作动画的方式可以是:句柄节点或骨骼具有一定质量,在模拟真实的重力场下,句柄节点或骨骼自行下垂并下落形成动作。该自动规则驱动节点运动的实现方法可用市面上常见的物理引擎装置实现。
然后,描述通过人机交互输入方式,拖动结构模板中的句柄节点或者骨骼树中的节点做动作的实现方式。具体的,用户可利用人机交互的输入方式,例如用鼠标或触摸拖动句柄节点或骨骼节点,生成用户想要的动作。
在本实施例中,还可以将上述三种方式可以进行任意组合,其实现方式可以如下所示:
具体的,用户可以将自动制作动画的方式与手动制作动画的方式相结合。结构模版的节点会一直受到重力场的作用,同时用户可自行通过交互方式向节点施加外力,装置可依据模拟自动与手动的力叠加时产生的效果制作动画。
在本实施例中,装置把动画数据应用于结构模版从而带动动画对象运动。具体,根据结构模版不同,动画对象受结构模版驱动的实现方式也不同。
第一种实现方式可以是:如果结构模版由句柄节点实现,句柄节点的运动向量会直接传递至绑定到这一节点的限制点上,从而使限制点发生了位移。在当前帧中,所有约束点的位移完成后,装置根据约束点位置的变化重新计算各顶点的位置,从而生成了变形的网格。计算约束顶点位置的算法可用任何合理的方法实现,此方面不作限制。装置再把动画对象的像素群作为纹理,映射到变化后的网格上,就完成了动画对象在这一帧的运动变化。
第二种实现方式可以是:如果结构模版是由骨骼树实现的,则骨骼关节的运动会使整个骨骼运动,骨骼带动绑定在其上的蒙皮网格进行位移和 形变。具体实现方法可用任何常见的通用算法,此方面不作限制。
其他形式的结构模版则可用任何合适的算法实现。在通常的实现方法中,动画数据中只包含动作的关键帧数据,而关键帧之间其他过渡帧中,可以由数学插值计算得出。
装置可以自动循环动画,直到用户重新选择另一套动画动作,或用户重新采集另一个动画对象后重新自动循环动画。
需要说明的是,上述图3至图8的所描述的操作内容可以进行不同程度的组合应用,为了简明,不再赘述各种组合的实现方式,本领域的技术人员可以按实际需要将上述的操作步骤的顺序进行灵活调整,或者将上述步骤进行灵活组合等操作。
上文主要从装置的角度详细说明了制作动画的各种实施例,下面则主要从用户的角度详细说明如何制作动画。
首先,选择与待制作动画的对象相匹配的预设的结构模板。
接着,将待制作动画的对象绑定至预设的结构模板上。
然后,选择预设的结构模板的预定动作系列,基于预定动作系列的动作,预设的结构模板带动待制作动画的对象做相应的动作。
为了说理简单,且不模糊版本技术发明的关键点,下面仅以儿童用户制作两个简单动画为例,说明制作动画的实现方式。本领域的技术人员可以理解,为了丰富动画的内容,可以增加动作的个数等方式来进行优化。
第一实施例为儿童制作小狗运动的动画的实现方式:
首先,儿童在平板电脑上用手触摸的方式选择对象类型——小狗,将小狗的图片绑定至小狗的结构模板上;
接着,儿童在平板电脑上用手触摸的方式选择情感类型——沮丧,则平板电脑的画面就会出现小狗沮丧的动画:小狗低头垂尾,在原地趴下。
第二实施例为儿童制作小皮球跳动的动画的实现方式:
首先,儿童在平板电脑上用手触摸的方式选择对象轮廓——圆形,将小皮球的图片绑定至圆形的结构模板上;
接着,儿童用在平板电脑上用手触摸的方式选择动作类型——跳动,则平板电脑的画面就会出现小皮球跳动的动画:小皮球往空中跳起,然后 落在地上弹起,再往空中跳起。
其中,用户还可是除了儿童之外的其他用户,例如,青年人、中年人和老人。实际上由于制作动画简单方便、生动有趣,该方法会更加吸引喜欢探索和爱好创作的儿童来使用。这些儿童无需掌握专业的动画师必须掌握的动画原理知识,仅通过简单的选择操作就可以轻松创作出自己喜欢的动画。
在一些可选的实施例中,可以在预设的动画素材数据库按动画对象类型存储有不同类型的结构模板。对象类型可以包括人物、动物、植物及物品等。具体的对象类型可以是老人、小孩、男青年、女青年、猫、狗、鱼、花、鸟和草等。另外,对象类型还可以根据动画制作的需要进行个性化设置,此方面内容不做限制。
在一些可选的实施例中,预设的动画素材数据库可以按动画对象不同轮廓形状存储有相应轮廓形状的结构模板。对象轮廓形状可以是圆形、正方形、长方形、星形、环形等。可以理解,该对象轮廓形状不仅可以是平面的形状,也可以是立体的形状,具体轮廓的形状可以根据动画制作的需要进行个性化设置,此方面内容不做限制。
在一些可选的实施例中,情感类型还可以包括:开心、失望、安静、限怒、哀冷、悲痛、忧愁、愤急、烦闷、恐惧、惊骇、恭敬、抚爱、憎恶、贪欲、嫉妒、傲慢、惭愧及耻辱中的一种或者多种。对象的情感类型可以由对象的动作表现,用户选择对象的情感类型就会调用相应的动作数据。例如,如果动画对象是狗,用户选择开心,狗会摇摇尾巴原地转圈。对象类型和情感类型还可以由装置根据对象的特征,如对象的身高、腿的数量,或对象身体的明暗、颜色,进行自动选择。此处自动选择特别适用于简单的情况,例如,只是简单的把对象分为直立行走的动物和爬行的四足动物两类。单纯根据整体形状下方的分叉数量判断腿的数量,同时用形体的高度辅助判断其是直立状态还是爬行状态,然后简单赋予直立行走或者爬行的动作。
通过本发明实施例,用户无需掌握专业动画原理知识,仅通过简单的操作就可以轻松创作出自己喜欢且具有专业效果的动画,且操作简单方便, 生动有趣、适用范围广。
另外,本发明实施例通过对自行创作的图片进行拍摄,提取图像中的动画对象用于制作动画,简单方便,趣味性强,将绘画与制作动画相结合,特别适合儿童用户使用。
图9是本发明一实施例的制作动画的装置的功能结构示意图900。
如图9所示,制作动画的装置可以包括:动画对象获取单元、动画对象绑定单元和动画制作单元。其中:
动画对象获取单元可以用于获取动画对象。动画对象绑定单元可以用于将动画对象绑定至对应的结构模板上。动画制作单元可以使结构模板动作,以驱动绑定在结构模板上的动画对象做相应的动作。
图10为本发明动画对象获取单元的第一实施例的功能结构示意图1000。如图10所示,动画对象获取单元可以包括:第一结构模板启动模块、第一图像拍摄模块和第一动画对象提取模块。其中:
第一结构模板启动模块可以用于响应于拍摄请求,启动预设的结构模板。第一图像拍摄模块可以用于在将目标拍摄物的图像与预设的结构模板基本匹配之后,拍摄包括目标拍摄物的图像的源图像。第一动画对象提取模块可以用于由源图像中提取处于预设的结构模板区域内的连通的像素群,生成动画对象。
图11为本发明动画对象获取单元的第二实施例的功能结构示意图1100。如图11所示,动画对象获取单元可以包括:第二图像拍摄模块、第二结构模板生成模块和第二动画对象提取模块。其中:
第二图像拍摄模块可以用于摄取包括目标拍摄物的源图像。第二结构模板生成模块可以用于由源图像中提取目标拍摄物的结构,并对结构中的线条进行简化处理,形成自动生成的结构模板。第二动画对象提取模块可以用于由源图像中提取连通的像素群,生成动画对象。
图12为本发明动画对象获取单元的第三实施例的功能结构示意图1200。如图12所示,动画对象获取单元可以包括:第三图像拍摄模块、第三结构模板生成模块和第三动画对象提取模块。其中:
第三图像拍摄模块可以用于响应于拍摄请求,拍摄包括目标拍摄物的 图像的源图像。第三结构模板生成模块可以用于通过人机交互的方式在目标拍摄物的图像中输入关键点的位置,将关键点连通,形成手工生成的结构模板。第三动画对象提取模块可以用于由源图像中提取连通的像素群,生成动画对象。
需要说明的是上述第一图像拍摄模块、第二图像拍摄模块、第三图像拍摄模块以及其它类似模块,可以根据实际需要采用相同的硬件来实现,也可以采用不同的硬件来实现,此方面不做限制。
图13为本发明结构模板绑定单元的第一实施例的功能结构示意图1300。其中,结构模板由句柄节点构成,如图13所示,结构模板绑定单元可以包括:第一网格化处理模块和第一动画对象绑定模块。其中:
第一网格化处理模块可以用于将动画对象进行网格化处理。第一动画对象绑定模块可以用于在网格中选取与结构模板中的句柄节点接近的网格点,利用网格点作为网格形变的约束点,将动画对象绑定至对应的结构模板上。
图14为本发明结构模板绑定单元的第二实施例的功能结构示意图1400。其中,结构模板由骨骼树构成,如图14所示,结构模板绑定单元可以包括:第二网格化处理模块和第二动画对象绑定模块。其中:
第二网格化处理模块可以用于将动画对象进行网格化处理。第二动画对象绑定模块可以用于利用对骨骼树进行蒙皮的方法,将动画对象绑定至对应的结构模板上。
在一些实施例中,动画制作单元动作的方式可以为下列方式的一种或者多种:基于预设的动作使结构模板动作;基于预设的运动规则使结构模板做动作;通过人机交互输入方式,拖动结构模板中的句柄节点或者骨骼树中的节点使结构模板做动作。
在一些实施例中,动画对象可以包括:一个或者多个场景内的目标对象、绘制在绘图平面上的目标图像、摆放在绘图平面上的目标物品。其中,绘图平面包括:绘制有预设背景的绘图卡片,或者以纯色为背景的绘图卡片。
在一些实施例中,制作动画的装置还可以包括:动画对象录制单元、 动画文件生成单元和动画文件显示/存储单元。其中:
动画对象录制单元可以用于逐帧录制动画对象做动作的画面。动画文件生成单元可以用于根据录制的画面生成动画文件,或者根据录制的画面生成动画文件并为动画文件配置背景和/或音频文件。动画文件显示/存储单元可以用于显示和/或存储动画文件。
需要说明的是:上述各实施例中的制作动画的装置可以是制作动画的方法中的执行主体,且制作动画的装置中的各个功能模块分别是为了实现各个方法的相应流程。本发明实施例中可以通过硬件处理器(hardware processor)来实现相关功能模块。各个功能模块只需实现其各自的功能,其具体连接关系不做限制。由于上述实施例的制作动画的装置与制作动画的方法的内容相对应,所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的各个功能单元的具体工作流程,可以参考前述方法实施例中的对应流程,在此不再赘述。
图15为本发明制作动画的装置的第一实施例的框架结构示意图1500。如图15所示,制作动画的装置可以包括:存储器、处理器和显示器。其中:
存储器可以用于存放素材数据和程序。处理器可以用于执行存储器存储的程序,程序使得处理器执行以下操作:获取动画对象;将动画对象绑定至对应的结构模板上;使结构模板动作,以驱动绑定在结构模板上的动画对象做相应的动作。显示器可以用于显示动画对象做相应的系列动作。
图16为本发明制作动画的装置的第二实施例的框架结构示意图1600。
如图16所示,该装置可以包括中央处理单元(CPU),其可以根据存储在只读存储器(ROM)中的程序或者从存储部分加载到随机访问存储器(RAM)中的程序而执行各种适当的动作和处理。在RAM中,还存储有装置操作所需的各种程序和数据。CPU、ROM以及RAM通过通信总线彼此相连。输入/输出(I/O)接口也连接至总线。
以下部件连接至I/O接口:包括键盘、鼠标等的输入部分;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分;包括硬盘等的存储部分;以及包括诸如LAN卡、调制解调器等的网络接 口卡的通信部分。通信部分经由诸如因特网的网络执行通信处理。驱动器也根据需要连接至I/O接口。可拆卸介质,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器上,以便于从其上读出的计算机程序根据需要被安装入存储部分。
However, it should be made clear that the present invention is not limited to the specific configurations and processes described above and shown in the figures. For brevity, detailed descriptions of known methods and techniques are omitted here. In the above embodiments, several specific steps are described and shown as examples, but the method processes of the present invention are not limited to those specific steps; those skilled in the art, after grasping the spirit of the present invention, can make various changes, modifications, and additions, or change the order of the steps.
The functional blocks shown in the structural block diagrams above may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, application-specific integrated circuits (ASICs), appropriate firmware, plug-ins, function cards, and so on. When implemented in software, the elements of the present invention are programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium, or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium capable of storing or transmitting information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, radio-frequency (RF) links, and so on. Code segments may be downloaded via a computer network such as the Internet or an intranet.
In addition, the functional units or modules in the various embodiments of the present invention may be integrated into one processing unit, may exist separately and physically, or two or more of them may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
Those skilled in the art should understand that the above embodiments are all exemplary and not restrictive. Different technical features appearing in different embodiments may be combined to obtain beneficial effects. On the basis of studying the drawings, the description, and the claims, those skilled in the art should be able to understand and implement other varied embodiments of the disclosed embodiments. In the claims, the term "comprising" does not exclude other means or steps; the indefinite article "a" or "an" does not exclude a plurality; and the terms "first", "second", and "third" are used to designate names rather than to indicate any particular order. Any reference signs in the claims shall not be construed as limiting the scope of protection. The functions of several parts appearing in the claims may be implemented by a single hardware or software module. The appearance of certain technical features in different dependent claims does not mean that those features cannot be combined to obtain beneficial effects.
Those of ordinary skill in the art may appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether these functions are performed in hardware or in software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
The above are only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and these modifications or substitutions shall all be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.

Claims (20)

  1. A method for making an animation, comprising:
    acquiring an animated object;
    binding the animated object to a corresponding structure template;
    actuating the structure template to drive the animated object bound to the structure template to perform corresponding actions.
  2. The method according to claim 1, wherein the structure template is a preset structure template, and acquiring the animated object comprises:
    starting the preset structure template in response to a shooting request;
    after an image of a target shooting object is substantially matched with the preset structure template, shooting a source image including the image of the target shooting object;
    extracting, from the source image, a connected pixel group within the region of the preset structure template to generate the animated object.
  3. The method according to claim 1, wherein the structure template is an automatically generated structure template, and acquiring the animated object comprises:
    capturing a source image including a target shooting object;
    extracting the structure of the target shooting object from the source image and simplifying the lines in the structure to form the automatically generated structure template;
    extracting a connected pixel group from the source image to generate the animated object.
  4. The method according to claim 1, wherein the structure template is a manually generated structure template, and acquiring the animated object comprises:
    shooting, in response to a shooting request, a source image including an image of a target shooting object;
    inputting positions of key points in the image of the target shooting object through human-computer interaction, and connecting the key points to form the manually generated structure template;
    extracting a connected pixel group from the source image to generate the animated object.
  5. The method according to any one of claims 1 to 4, wherein the structure template is composed of handle nodes, and binding the animated object to the corresponding structure template comprises:
    meshing the animated object;
    selecting, in the mesh, mesh points close to the handle nodes of the structure template, and binding the animated object to the corresponding structure template by using the mesh points as constraint points of the mesh deformation.
  6. The method according to claim 5, wherein the structure template is composed of a skeleton tree, and binding the animated object to the corresponding structure template comprises:
    meshing the animated object;
    binding the animated object to the corresponding structure template by skinning the skeleton tree.
  7. The method according to claim 6, wherein the structure template is actuated in at least one of the following ways:
    actuating the structure template based on preset actions;
    actuating the structure template based on preset motion rules;
    actuating the structure template by dragging, through human-computer interaction input, handle nodes of the structure template or nodes of the skeleton tree.
  8. The method according to any one of claims 1 to 4, wherein the animated object comprises: a target object in one or more scenes, a target image drawn on a drawing plane, or a target article placed on a drawing plane,
    wherein the drawing plane comprises:
    a drawing card with a preset background, or a drawing card with a solid-color background.
  9. The method according to any one of claims 1 to 4, further comprising:
    recording, frame by frame, pictures of the animated object performing actions;
    generating an animation file from the recorded pictures;
    displaying and/or storing the animation file.
  10. The method according to any one of claims 1 to 4, further comprising:
    configuring a background and/or an audio file for the animation file.
  11. An apparatus for making an animation, comprising:
    an animated object acquisition unit configured to acquire an animated object;
    an animated object binding unit configured to bind the animated object to a corresponding structure template;
    an animation production unit configured to actuate the structure template to drive the animated object bound to the structure template to perform corresponding actions.
  12. The apparatus according to claim 11, wherein the structure template is a preset structure template, and the animated object acquisition unit comprises:
    a first structure template starting module configured to start the preset structure template in response to a shooting request;
    a first image shooting module configured to shoot, after an image of a target shooting object is substantially matched with the preset structure template, a source image including the image of the target shooting object;
    a first animated object extraction module configured to extract, from the source image, a connected pixel group within the region of the preset structure template to generate the animated object.
  13. The apparatus according to claim 11, wherein the structure template is an automatically generated structure template, and the animated object acquisition unit comprises:
    a second image shooting module configured to capture a source image including a target shooting object;
    a second structure template generation module configured to extract the structure of the target shooting object from the source image and to simplify the lines in the structure to form the automatically generated structure template;
    a second animated object extraction module configured to extract a connected pixel group from the source image to generate the animated object.
  14. The apparatus according to claim 11, wherein the structure template is a manually generated structure template, and the animated object acquisition unit comprises:
    a third image shooting module configured to shoot, in response to a shooting request, a source image including an image of a target shooting object;
    a third structure template generation module configured to receive, through human-computer interaction, positions of key points input in the image of the target shooting object and to connect the key points to form the manually generated structure template;
    a third animated object extraction module configured to extract a connected pixel group from the source image to generate the animated object.
  15. The apparatus according to any one of claims 11 to 14, wherein the structure template is composed of handle nodes, and the structure template binding unit comprises:
    a first meshing module configured to mesh the animated object;
    a first animated object binding module configured to select, in the mesh, mesh points close to the handle nodes of the structure template, and to bind the animated object to the corresponding structure template by using the mesh points as constraint points of the mesh deformation.
  16. The apparatus according to claim 15, wherein the structure template is composed of a skeleton tree, and the structure template binding unit comprises:
    a second meshing module configured to mesh the animated object;
    a second animated object binding module configured to bind the animated object to the corresponding structure template by skinning the skeleton tree.
  17. The apparatus according to claim 16, wherein the animation production unit actuates the structure template in at least one of the following ways:
    actuating the structure template based on preset actions;
    actuating the structure template based on preset motion rules;
    actuating the structure template by dragging, through human-computer interaction input, handle nodes of the structure template or nodes of the skeleton tree.
  18. The apparatus according to any one of claims 11 to 14, wherein the animated object comprises: a target object in one or more scenes, a target image drawn on a drawing plane, or a target article placed on a drawing plane,
    wherein the drawing plane comprises:
    a drawing card with a preset background, or a drawing card with a solid-color background.
  19. The apparatus according to any one of claims 11 to 14, further comprising:
    an animated object recording unit configured to record, frame by frame, pictures of the animated object performing actions;
    an animation file generation unit configured to generate an animation file from the recorded pictures, or to generate an animation file from the recorded pictures and configure a background and/or an audio file for the animation file;
    an animation file display/storage unit configured to display and/or store the animation file.
  20. An apparatus for making an animation, comprising:
    a memory configured to store material data and programs;
    a processor configured to execute the programs stored in the memory, the programs causing the processor to:
    acquire an animated object;
    bind the animated object to a corresponding structure template;
    actuate the structure template to drive the animated object bound to the structure template to perform corresponding actions; and
    a display configured to display the animated object performing the corresponding actions.
PCT/CN2017/092940 2016-08-01 2017-07-14 Method and apparatus for making an animation WO2018024089A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020197003184A KR20190025691A (ko) 2016-08-01 2017-07-14 동영상을 제작하는 방법 및 장치
US16/318,202 US20190251730A1 (en) 2016-08-01 2017-07-14 Method and apparatus for making an animation
EP17836271.1A EP3471062A4 (en) 2016-08-01 2017-07-14 ANIMATION GENERATION METHOD AND DEVICE
JP2019524499A JP2019528544A (ja) 2016-08-01 2017-07-14 動画を制作する方法及び装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610622304.0A CN106251389B (zh) 2016-08-01 2016-08-01 制作动画的方法和装置
CN201610622304.0 2016-08-01

Publications (1)

Publication Number Publication Date
WO2018024089A1 true WO2018024089A1 (zh) 2018-02-08

Family

ID=57605851

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/092940 WO2018024089A1 (zh) 2016-08-01 2017-07-14 制作动画的方法和装置

Country Status (6)

Country Link
US (1) US20190251730A1 (zh)
EP (1) EP3471062A4 (zh)
JP (1) JP2019528544A (zh)
KR (1) KR20190025691A (zh)
CN (1) CN106251389B (zh)
WO (1) WO2018024089A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445558A (zh) * 2020-03-23 2020-07-24 华强方特(深圳)动漫有限公司 一种应用Alembic格式的三维制作方法
CN112184863A (zh) * 2020-10-21 2021-01-05 网易(杭州)网络有限公司 一种动画数据的处理方法和装置
CN111951360B (zh) * 2020-08-14 2023-06-23 腾讯科技(深圳)有限公司 动画模型处理方法、装置、电子设备及可读存储介质

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106251389B (zh) * 2016-08-01 2019-12-24 北京小小牛创意科技有限公司 制作动画的方法和装置
CN107391144B (zh) * 2017-07-27 2020-01-03 武汉斗鱼网络科技有限公司 视图展示方法及装置
CN108170782A (zh) * 2017-12-26 2018-06-15 郑州威科姆科技股份有限公司 一种教学动画资源批量生成系统
CN108921919A (zh) * 2018-06-08 2018-11-30 北京小小牛创意科技有限公司 动画展示、制作方法及装置
CN111640176A (zh) * 2018-06-21 2020-09-08 华为技术有限公司 一种物体建模运动方法、装置与设备
CN109684487A (zh) * 2018-11-06 2019-04-26 北京小小牛创意科技有限公司 媒体文件及其生成方法和播放方法
US10643365B1 (en) * 2018-11-20 2020-05-05 Adobe Inc. Deformation mesh control for a computer animated artwork
CN110211208A (zh) * 2019-06-06 2019-09-06 山西师范大学 一种3dmax动画辅助制作系统
CN113345057A (zh) * 2020-02-18 2021-09-03 京东方科技集团股份有限公司 动画形象的生成方法、设备及存储介质
CN111968201A (zh) * 2020-08-11 2020-11-20 深圳市前海手绘科技文化有限公司 一种基于手绘素材的手绘动画素材生成方法
CN112991500A (zh) * 2021-03-12 2021-06-18 广东三维家信息科技有限公司 一种家装影视动画方法、装置、电子设备及存储介质
CN113050795A (zh) * 2021-03-24 2021-06-29 北京百度网讯科技有限公司 虚拟形象的生成方法及装置
CN113546415B (zh) * 2021-08-11 2024-03-29 北京字跳网络技术有限公司 剧情动画播放方法、生成方法、终端、装置及设备
CN114642863A (zh) * 2022-03-16 2022-06-21 温州大学 一种用于幼儿园的户外体育游戏系统

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271593A (zh) * 2008-04-03 2008-09-24 石家庄市桥西区深度动画工作室 一种3Dmax动画辅助制作系统
US20090153569A1 (en) * 2007-12-17 2009-06-18 Electronics And Telecommunications Research Institute Method for tracking head motion for 3D facial model animation from video stream
CN101968892A (zh) * 2009-07-28 2011-02-09 上海冰动信息技术有限公司 根据一张人脸照片自动调整三维人脸模型的方法
US20120218262A1 (en) * 2009-10-15 2012-08-30 Yeda Research And Development Co. Ltd. Animation of photo-images via fitting of combined models
WO2012167475A1 (zh) * 2011-07-12 2012-12-13 华为技术有限公司 生成形体动画的方法及装置
CN104408775A (zh) * 2014-12-19 2015-03-11 哈尔滨工业大学 基于深度感知的三维皮影戏制作方法
CN105608934A (zh) * 2015-12-21 2016-05-25 大连新锐天地传媒有限公司 Ar儿童故事早教舞台剧系统
CN106251389A (zh) * 2016-08-01 2016-12-21 北京小小牛创意科技有限公司 制作动画的方法和装置

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200540732A (en) * 2004-06-04 2005-12-16 Bextech Inc System and method for automatically generating animation
WO2009031155A2 (en) * 2007-09-06 2009-03-12 Yeda Research And Development Co. Ltd. Modelization of objects in images
US8565476B2 (en) * 2009-01-30 2013-10-22 Microsoft Corporation Visual target tracking
US20140267425A1 (en) * 2013-03-15 2014-09-18 Crayola Llc Personalized Digital Animation Kit
US20160019708A1 (en) * 2014-07-17 2016-01-21 Crayola, Llc Armature and Character Template for Motion Animation Sequence Generation
CN105447047B (zh) * 2014-09-02 2019-03-15 阿里巴巴集团控股有限公司 建立拍照模板数据库、提供拍照推荐信息的方法及装置
CN104978758A (zh) * 2015-06-29 2015-10-14 世优(北京)科技有限公司 基于用户创作的图像的动画视频生成方法和装置
CN105204859B (zh) * 2015-09-24 2018-09-25 广州视睿电子科技有限公司 动画管理方法及其系统
CN105447896A (zh) * 2015-11-14 2016-03-30 华中师范大学 一种幼儿动画创作系统
CN105446682A (zh) * 2015-11-17 2016-03-30 厦门正景智能工程有限公司 一种通过投影将儿童涂画转换为动画仿真互动展示系统


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3471062A4 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445558A (zh) * 2020-03-23 2020-07-24 华强方特(深圳)动漫有限公司 一种应用Alembic格式的三维制作方法
CN111445558B (zh) * 2020-03-23 2023-05-16 华强方特(深圳)动漫有限公司 一种应用Alembic格式的三维制作方法
CN111951360B (zh) * 2020-08-14 2023-06-23 腾讯科技(深圳)有限公司 动画模型处理方法、装置、电子设备及可读存储介质
CN112184863A (zh) * 2020-10-21 2021-01-05 网易(杭州)网络有限公司 一种动画数据的处理方法和装置
CN112184863B (zh) * 2020-10-21 2024-03-15 网易(杭州)网络有限公司 一种动画数据的处理方法和装置

Also Published As

Publication number Publication date
EP3471062A4 (en) 2020-03-11
CN106251389B (zh) 2019-12-24
JP2019528544A (ja) 2019-10-10
US20190251730A1 (en) 2019-08-15
KR20190025691A (ko) 2019-03-11
EP3471062A1 (en) 2019-04-17
CN106251389A (zh) 2016-12-21

Similar Documents

Publication Publication Date Title
WO2018024089A1 (zh) Method and apparatus for making an animation
US11600033B2 (en) System and method for creating avatars or animated sequences using human body features extracted from a still image
US10776981B1 (en) Entertaining mobile application for animating a single image of a human body and applying effects
Kazi et al. Draco: bringing life to illustrations with kinetic textures
KR20210019552A (ko) 객체 모델링 및 움직임 방법 및 장치, 그리고 기기
CN108062796B (zh) 基于移动终端的手工制品与虚拟现实体验系统及方法
US9262853B2 (en) Virtual scene generation based on imagery
US20180158226A1 (en) Object creation using body gestures
CN112669414B (zh) 动画数据的处理方法及装置、存储介质、计算机设备
Matsui et al. DrawFromDrawings: 2D drawing assistance via stroke interpolation with a sketch database
Smith et al. A method for animating children’s drawings of the human figure
CN113838158A (zh) 一种图像和视频的重构方法、装置、终端设备及存储介质
Pantuwong A tangible interface for 3D character animation using augmented reality technology
KR20210134229A (ko) 이미지 증강을 위한 방법 및 전자 장치
Cai et al. Immersive interactive virtual fish swarm simulation based on infrared sensors
Gouvatsos 3D storyboarding for modern animation.
Figueroa et al. A pen and paper interface for animation creation
US11410368B2 (en) Animation control rig generation
US11450054B2 (en) Method for operating a character rig in an image-generation system using constraints on reference nodes
Yao et al. ShadowMaker: Sketch-Based Creation Tool for Digital Shadow Puppetry
Wang et al. Animation Generation Technology Based on Deep Learning: Opportunities and Challenges
Mullen et al. Blender studio projects: digital movie-making
Kundert-Gibbs et al. Maya® Secrets of the ProsTM
Shiratori User Interfaces for Character Animation and Character Interaction
PENG Sketch2Motion: Sketch-Based Interface for Human Motions Retrieval and Character Animation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17836271

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019524499

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2017836271

Country of ref document: EP

Effective date: 20190111

ENP Entry into the national phase

Ref document number: 20197003184

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE