CN115619910A - Method and device for realizing puppet animation control by binding animation control nodes - Google Patents
- Publication number
- CN115619910A (application CN202211324492.0A)
- Authority
- CN
- China
- Prior art keywords
- texture
- triangular mesh
- result
- triangular
- control points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—Physics; G06—Computing, calculating or counting; G06T—Image data processing or generation, in general
- G06T13/20—3D [Three Dimensional] animation
- G06T15/04—3D image rendering: texture mapping
- G06T17/205—3D modelling: re-meshing
- G06T5/30—Image enhancement or restoration: erosion or dilatation, e.g. thinning
- G06T7/11—Image analysis: region-based segmentation
Abstract
The invention relates to a method and a device for realizing puppet animation control by binding animation control nodes. The method comprises: obtaining the original texture of an input image and preprocessing it to obtain a result texture; performing contour detection to obtain a contour detection result, and performing a meshing operation on it to obtain a first triangular mesh; performing a meshing operation on the first triangular mesh using preset position control points and hardening control points to obtain a second triangular mesh; and rendering the second triangular mesh by combining preset hierarchical control points, the original texture, and the first triangular mesh, then outputting the texture result. According to the method, after a content-relation mesh is built for the picture, position control points, hardening control points, and hierarchical control points are added to apply rigid deformation to the planar mesh, producing deformation animations with a high degree of freedom and high realism. Compared with existing deformation algorithms, a mesh-based mutual-influence relation is added and the deformation is nonlinear, so the deformation animation edited by the user looks more real and vivid.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a device for realizing puppet animation control by binding animation control nodes.
Background
In puppet animation, by analyzing the content of the non-transparent pixels in a flat picture, automatically and efficiently extracting the main subject, and building a content-relation mesh, a user can freely add control points to the picture and create key-frame animations whose shape is finely adjustable and whose deformation is natural and vivid. This is one of the most popular effects in dynamic picture processing and has long been in demand.
However, almost all video and image editing applications currently on the mobile market can only provide affine deformation. Such processing is applied uniformly to the whole picture; it neither extracts nor processes the main object in the picture, and a specific part of the picture cannot be adjusted freely. Although key-frame animation is also possible with this approach, the result is stiff and inflexible and cannot meet users' real needs. Some applications use a liquify-like technique, but it is difficult to control, demands high operating precision, and does not support key-frame animation, that is, it cannot truly turn a picture into a video with dynamic effects, so the users' need to create dynamic content remains unmet.
Disclosure of Invention
In view of this, the present invention provides a method and an apparatus for implementing puppet animation control by binding animation control nodes, so as to overcome the defects of the prior art and solve the problem that users' need to create dynamic content cannot be met.
In order to achieve the purpose, the invention adopts the following technical scheme: a method for realizing puppet animation control by binding animation control nodes comprises the following steps:
acquiring original textures of an input image, and preprocessing the original textures to obtain result textures;
carrying out contour detection on the result texture to obtain a contour detection result with an inclusion relation and a hierarchical structure, and carrying out gridding operation on the contour detection result to obtain a first triangular grid;
carrying out gridding operation on the first triangular mesh by adopting preset position control points and hardening control points to obtain a second triangular mesh;
rendering the second triangular mesh by combining preset hierarchical control points, the original texture, and the first triangular mesh, and outputting a texture result.
Further, preprocessing the original texture to obtain a result texture, including:
performing main body segmentation on the original texture to obtain a segmentation mask texture;
carrying out Gaussian blur processing on the segmentation mask texture to obtain Gaussian blur texture;
carrying out binarization processing on the Gaussian-blurred texture to obtain a binarized texture;
and performing morphological operation on the binarization texture to obtain a result texture.
Further, performing a gridding operation on the contour detection result to obtain a first triangular grid, including:
traversing each element of the contour detection result, then traversing the sub-contours of each element in hierarchical order, to obtain a shape division result;
traversing each element in the shape division result, and carrying out merging and shape approximation on the outline of each element in the shape division result to obtain a merging result;
and calculating the merging result by adopting a triangulation algorithm to obtain a first triangular mesh.
Further, one element of the position control points represents the target position of a corresponding mesh vertex in the first triangular mesh, and one element of the hardening control points represents the weight of a corresponding triangle in the first triangular mesh; performing the meshing operation on the first triangular mesh using the preset position control points and hardening control points to obtain the second triangular mesh includes:
judging whether the grid vertex of the first triangular grid appears in the position control point, dividing the non-appeared grid vertex into a first group, and dividing the appeared grid vertex into a second group;
expanding the vertices in the first group and the second group into an expanded vector, and determining the coordinates to be solved and the constraint coordinates from the expanded vector;
and solving for the minimum of a loss function to obtain the solved values, the solved values determining the second triangular mesh.
Furthermore, one element of the hierarchical control points represents a position, a depth value, and an influence range; rendering the second triangular mesh by combining the preset hierarchical control points, the original texture, and the first triangular mesh and outputting a texture result includes:
dividing the original texture into a plurality of triangular textures according to the first triangular mesh;
mapping the plurality of triangular textures to a second triangular mesh according to the mapping relation between the first triangular mesh and the second triangular mesh;
calculating the depth information of each triangle in the second triangular mesh according to the level control points;
sorting the drawing order of the second triangular mesh according to the depth information to obtain the output texture; wherein triangles with larger depth information are drawn later.
Further, mapping the plurality of triangle textures into the second triangular mesh includes:
determining coordinate points in the original triangle;
determining the value of the coordinate point in the coordinate system formed by the triangle's edge vectors;
determining a new coordinate obtained after the coordinate point is deformed;
a new value of the new coordinates is calculated.
Further, the calculating the depth information of each triangle in the second triangular mesh according to the hierarchical control points includes:
traversing each triangle in the first triangular mesh, and executing the following steps for each triangle:
initializing depth information of a triangle; wherein the initialized depth information is zero;
traversing the hierarchical control points, judging whether the distance between the center of each triangle and the current hierarchical control point is smaller than a preset value, and if so, adding the depth value of the current hierarchical control point to the depth information of the triangle.
Further, the minimum of the loss function is solved using the Levenberg-Marquardt algorithm.
The embodiment of the application provides a device for realizing puppet animation control by binding animation control nodes, which comprises:
the acquisition module is used for acquiring the original texture of an input image and preprocessing the original texture to obtain a result texture;
the operation module is used for carrying out contour detection on the result texture to obtain a contour detection result with an inclusion relation and a hierarchical structure, and carrying out gridding operation on the contour detection result to obtain a first triangular grid;
the processing module is used for carrying out gridding operation on the first triangular mesh by adopting preset position control points and hardening control points to obtain a second triangular mesh;
and the output module is used for rendering the second triangular mesh by combining the preset hierarchical control points, the original texture and the first triangular mesh and outputting a texture result.
An embodiment of the present application provides a mobile terminal, comprising a memory and a processor, wherein the memory stores a computer program that causes the processor to execute the steps of any one of the above methods for implementing puppet animation control by binding animation control nodes.
By adopting the technical scheme, the invention can achieve the following beneficial effects:
the invention provides a method and a device for realizing puppet animation control by binding animation control nodes. The position control point can carry out distortion deformation on the picture content at the position of the control point; the hardening control point can solidify the picture content of the area where the hardening control point is located and is used for reducing or intensifying the deformation degree of the designated image area; and the level control points can adjust the rendering level sequence of the divided deformation areas. Compared with the existing deformation algorithm, the method and the device have the advantages that the mutual influence relation based on grids is increased, and the nonlinear deformation is realized, so that the deformation animation edited by a user is more real and vivid.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram illustrating steps of a method for implementing puppet animation control by binding animation control nodes according to the present invention;
FIG. 2 is a flow chart illustrating a method for implementing puppet animation control by binding animation control nodes according to the present invention;
FIG. 3 is a schematic structural diagram of the device for implementing puppet animation control by binding animation control nodes according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
A specific method and apparatus for implementing puppet animation control by binding animation control nodes provided in the embodiments of the present application are described below with reference to the accompanying drawings.
As shown in fig. 1, a method for implementing puppet animation control by binding animation control nodes provided in the embodiment of the present application includes:
s101, acquiring original textures of an input image, and preprocessing the original textures to obtain result textures;
It should be noted that the method provided by the present application may be implemented on a mobile terminal, which may be a smartphone, a tablet computer, or a handheld terminal. The input image may be captured by a camera on the mobile terminal or taken from an album stored on it. The original texture is obtained from the input image and then preprocessed to obtain the result texture.
S102, carrying out contour detection on the result texture to obtain a contour detection result with an inclusion relation and a hierarchical structure, and carrying out gridding operation on the contour detection result to obtain a first triangular grid;
and then carrying out contour detection on the result texture to obtain a contour detection result, and carrying out gridding operation on the contour detection result to obtain a first triangular grid.
S103, carrying out gridding operation on the first triangular mesh by adopting preset position control points and hardening control points to obtain a second triangular mesh;
A user can set position control points and hardening control points on the mobile terminal. The position control points twist and deform the picture content at their locations; the hardening control points solidify the picture content in their areas and are used to reduce or intensify the degree of deformation of the designated image region. A meshing operation is performed on the first triangular mesh using the position control points and hardening control points to obtain the second triangular mesh.
S104, rendering the second triangular mesh by combining the preset hierarchical control points, the original texture, and the first triangular mesh, and outputting a texture result.
The hierarchical control points adjust the rendering order of the divided deformation regions; the second triangular mesh is then rendered to obtain the final texture result.
The working principle of the method for realizing puppet animation control by binding animation control nodes is as follows. Referring to fig. 2, the input image is captured by a camera on the mobile terminal or taken from an album stored on it, and its original texture is obtained. The original texture is preprocessed to obtain the result texture; contour detection is performed on the result texture to obtain the contour detection result; a meshing operation is performed on the contour detection result to obtain the first triangular mesh; a meshing operation is performed on the first triangular mesh using the position control points and hardening control points set by the user on the mobile terminal to obtain the second triangular mesh; and the second triangular mesh is rendered using the hierarchical control points set by the user, the original texture, and the first triangular mesh, and the texture result is output.
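The three control-point types used throughout the pipeline above can be sketched as simple data structures. The following is a minimal illustrative sketch in Python; all field names are assumptions, since the patent does not specify a data layout:

```python
from dataclasses import dataclass

@dataclass
class PositionControlPoint:
    vertex_index: int    # which vertex of the first triangular mesh is anchored
    target_x: float      # target position of that vertex after deformation
    target_y: float

@dataclass
class HardeningControlPoint:
    triangle_index: int  # which triangle of the first mesh is stiffened
    weight: float        # deformation weight of that triangle

@dataclass
class HierarchicalControlPoint:
    x: float             # position information
    y: float
    depth: float         # depth value z
    radius: float        # influence range

# Example: anchor mesh vertex 3 at target position (10.0, 4.5)
pos = PositionControlPoint(vertex_index=3, target_x=10.0, target_y=4.5)
```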
In some embodiments, pre-processing the original texture to obtain a result texture comprises:
performing main body segmentation on the original texture to obtain a segmentation mask texture;
performing Gaussian blur processing on the segmentation mask texture to obtain Gaussian blur texture;
carrying out binarization processing on the Gaussian fuzzy texture to obtain a binarization texture;
and performing morphological operation on the binarization texture to obtain a result texture.
Specifically, image subject segmentation is performed on the input original texture A to obtain a segmentation mask texture B. Each pixel of B is given a segmentation value x from 0 to 255 indicating the probability that the pixel belongs to the segmented subject; the larger the value, the higher the probability. The segmentation model is obtained by training a self-developed, improved UNet network and is used to segment the texture.
And then, carrying out Gaussian blur processing on the segmentation mask texture B to obtain a Gaussian blur texture C.
The Gaussian-blurred texture C is binarized to obtain a binarized texture D: a threshold t is set, and each segmentation probability x in C is binarized to x1 according to
x1 = 255 if x > t, and x1 = 0 otherwise.
Preferably, 31 is used as the threshold t in the present invention.
Morphological operations are then applied to the binarized texture D to obtain the result texture E, which serves as the output. The specific steps are:
First, an erosion operation (the kernel shown in the source is elided here) is performed 3 times to eliminate fine burrs and smooth the segmentation mask;
then, a dilation operation is performed 5 times to eliminate tiny holes, reduce the time overhead of subsequent mesh construction, improve mesh stability, and avoid overfitting;
finally, an erosion operation is performed 2 times so that the total area of the segmented image remains stable.
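The preprocessing steps above (binarization with threshold t = 31, then erode 3 times, dilate 5 times, erode 2 times) might be sketched as follows. In practice a library such as OpenCV would supply the blur and morphology operators; this is a minimal NumPy sketch, and the 3x3 square kernel is an assumption since the kernels are elided in the source:

```python
import numpy as np

def binarize(mask: np.ndarray, t: int = 31) -> np.ndarray:
    """Binarize a 0-255 soft segmentation mask with threshold t."""
    return np.where(mask > t, 255, 0).astype(np.uint8)

def erode(img, k=1):
    """Binary erosion with a (2k+1)x(2k+1) square kernel (assumed size)."""
    pad = np.pad(img, k, constant_values=0)
    out = np.full_like(img, 255)
    h, w = img.shape
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out = np.minimum(out, pad[dy:dy + h, dx:dx + w])
    return out

def dilate(img, k=1):
    """Binary dilation with the same square kernel."""
    pad = np.pad(img, k, constant_values=0)
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out = np.maximum(out, pad[dy:dy + h, dx:dx + w])
    return out

def morphological_cleanup(binary):
    """3 erosions (remove burrs), 5 dilations (fill holes), 2 erosions."""
    for _ in range(3):
        binary = erode(binary)
    for _ in range(5):
        binary = dilate(binary)
    for _ in range(2):
        binary = erode(binary)
    return binary
```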
In some embodiments, performing a gridding operation on the contour detection result to obtain a first triangular mesh includes:
traversing each element of the contour detection result, and then traversing each element in the contour detection result according to a hierarchical order to obtain a shape division result;
traversing each element in the shape division result, and carrying out merging and shape approximation on the outline of each element in the shape division result to obtain a merging result;
and calculating the combination result by adopting a triangular mesh generation algorithm to obtain a first triangular mesh.
Specifically, after the result texture is obtained, contour detection is first performed to obtain a contour detection result H with inclusion relations and a hierarchical structure. H is an array in which each element contour0 represents a contour not contained by any other contour, i.e., an outermost contour. Each contour0 may contain a child contour1, which may in turn nest its own child contour2, and so on.
And further processing the contour detection result H to obtain a shape division result K. The method comprises the following specific steps:
Each element in the contour detection result H is traversed, and then the sub-contours contained in each element are traversed in hierarchical order: contours on even layers are extracted as outer contours, and the odd-layer contours directly below an even-layer contour are taken as the holes of that contour. After all elements of H have been traversed, the shape division result K is obtained. Each element of K is a shape represented by a two-layer contour structure, where the upper layer is the outline of the shape and the lower layer holds the contours of the holes in the shape.
Each element in the shape division result K is traversed, and the contours of each element are merged and shape-approximated to obtain a merged result K2. The purpose of this step is to reduce the number of nodes in the contours, thereby reducing the number of triangles in the subsequent meshing and lowering the performance cost.
A triangulation algorithm is executed for each shape in the merged result K2 to obtain a uniformly subdivided first triangular mesh Mesh, which serves as the output.
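The even/odd-layer rule above can be sketched as a small tree walk. The dict-based contour tree below is an illustrative assumption; in practice the hierarchy would come from a contour detector such as OpenCV's findContours in tree mode:

```python
def split_shapes(contours, depth=0, shapes=None):
    """Walk the contour tree; each even-depth contour starts a new shape
    whose holes are its direct (odd-depth) children."""
    if shapes is None:
        shapes = []
    for c in contours:
        if depth % 2 == 0:
            # even layer: outer outline; its direct children are holes
            shapes.append({
                "outline": c["points"],
                "holes": [child["points"] for child in c.get("children", [])],
            })
        # recurse: grandchildren of an outline become new outer outlines
        split_shapes(c.get("children", []), depth + 1, shapes)
    return shapes

# Outline containing a hole, which itself contains an island shape.
tree = [{
    "points": "outer",
    "children": [{
        "points": "hole",
        "children": [{"points": "island", "children": []}],
    }],
}]
shapes = split_shapes(tree)
```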
In some embodiments, one element of the position control points represents a target position of a corresponding one of the mesh vertices in the first triangular mesh, and one element of the hardening control points represents a weight of a corresponding one of the triangles in the first triangular mesh; adopt preset position accuse point and sclerosis accuse point to carry out the meshing operation to first triangular mesh, obtain second triangular mesh, include:
judging whether the grid vertex of the first triangular grid appears in the position control point, dividing the non-appeared grid vertex into a first group, and dividing the appeared grid vertex into a second group;
carrying out expansion processing on vertexes in the first group and the second group to obtain expansion vectors, and determining a coordinate to be solved and a constraint coordinate according to the expansion vectors;
and solving the minimum value of the loss function according to the loss function to obtain a solved value, and determining the solved value as a second triangular grid.
Specifically, after the first triangular mesh Mesh is obtained, a group of position control points poslist is input manually by the user, where one element of poslist represents the target position of a certain mesh vertex in the triangular mesh. The following operations are then carried out:
and dividing the vertex of the triangular Mesh into two groups according to whether the vertex appears in the poslist, and respectively recording the two groups as a first group alist and a second group blist, wherein the first group alist represents the vertex position to be solved, and the second group blist represents the vertex position which is input by a user and is anchored. At this time, the problem becomes: given the information of some points in the mesh and the connection relation (the formed triangular shape) of the vertexes in the mesh in the original state, only the constraint needs to be constructed, and the positions of the rest free vertexes after deformation are solved.
Specifically, the vertices in alist and blist are expanded into the vector form [alist1.x, alist1.y, ..., alistN.x, alistN.y, blist1.x, blist1.y, ..., blistM.x, blistM.y], where the first 2N entries are the coordinate values to be solved and the last 2M entries are the constraint coordinates.
And defining a loss function E, and solving the minimum value of the function to obtain a final value to be solved.
E = E_edges + E_radians
where E_edges = Σ_i ||e_i' − e_i||^2, with e_i an edge vector of the original mesh and e_i' the corresponding edge vector of the mesh to be solved; and E_radians = Σ_i (θ_i' − θ_i)^2, with θ_i the angle formed by two edges at a vertex of a triangle in the original mesh and θ_i' the corresponding angle in the mesh to be solved.
The above formula is a least-squares problem; in general, such optimization problems can be solved with algorithms such as Newton's method, quasi-Newton methods, or the Gauss-Newton method. Considering performance and effect, the Levenberg-Marquardt algorithm is adopted here to solve the least-squares problem, yielding the deformed second triangular mesh Mesh2.
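As a sanity check on the loss described above, the following sketch evaluates E = E_edges + E_radians for a given deformation. Unit weights are assumed (the hardening weights would scale individual terms), and the actual minimization would use a Levenberg-Marquardt solver such as scipy.optimize.least_squares rather than this direct evaluation:

```python
import numpy as np

def angles_of(tri_pts):
    """Interior angles of a triangle given its three 2-D corner points."""
    a, b, c = tri_pts
    def ang(p, q, r):  # angle at p between edges p->q and p->r
        u, v = q - p, r - p
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cosang, -1.0, 1.0))
    return np.array([ang(a, b, c), ang(b, c, a), ang(c, a, b)])

def deformation_loss(orig, solved, triangles):
    """E = E_edges + E_radians over all triangles.
    orig, solved: (V, 2) vertex arrays; triangles: (i, j, k) index triples."""
    e = 0.0
    for i, j, k in triangles:
        for p, q in ((i, j), (j, k), (k, i)):
            # E_edges term: squared change of each edge vector
            e += np.sum(((solved[q] - solved[p]) - (orig[q] - orig[p])) ** 2)
        # E_radians term: squared change of each interior angle
        e += np.sum((angles_of(solved[[i, j, k]]) - angles_of(orig[[i, j, k]])) ** 2)
    return e
```

Note that a pure translation changes neither edge vectors nor angles, so its loss is zero; the loss penalizes only non-rigid distortion.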
In some embodiments, one element of the hierarchical control points represents a position, a depth value, and an influence range; rendering the second triangular mesh by combining the preset hierarchical control points, the original texture, and the first triangular mesh and outputting a texture result includes:
dividing the original texture into a plurality of triangular textures according to the first triangular mesh;
mapping the plurality of triangular textures to a second triangular mesh according to the mapping relation between the first triangular mesh and the second triangular mesh;
calculating the depth information of each triangle in the second triangular mesh according to the level control points;
sorting the drawing order of the second triangular mesh according to the depth information to obtain the output texture; wherein triangles with larger depth information are drawn later.
Preferably, the calculating the depth information of each triangle in the second triangular mesh according to the hierarchical control point includes:
traversing each triangle in the first triangular mesh, and executing the following steps for each triangle:
initializing depth information of a triangle; wherein the initialization depth information is zero;
and traversing the hierarchical control points, judging whether the distance between the center of the triangle and the current hierarchical control point is smaller than a preset value, and if so, adding the depth value of the current hierarchical control point to the depth information of the triangle.
In some embodiments, mapping the plurality of triangle textures into the second triangular mesh comprises:
determining coordinate points in the original triangle;
determining the value of the coordinate point in the coordinate system formed by the triangle's edge vectors;
determining a new coordinate obtained after the coordinate point is deformed;
a new value of the new coordinates is calculated.
Specifically, the application now has the original texture A, the first triangular mesh Mesh, and the second triangular mesh Mesh2; a hierarchical control point array depthlist is then obtained from user input, where each element of depthlist consists of position information (x, y), a depth value z, and an influence range radius. The following processing is then carried out:
according to the original texture A and the first triangular Mesh, the original texture A is divided into small triangular textures triangleTextureList.
triangleTextureList is mapped into the second triangular mesh Mesh2 according to the mapping relation between the first triangular mesh Mesh and the second triangular mesh Mesh2. The specific mapping rule is as follows:
Suppose a point (X, Y) lies in the original triangle P0 P1 P2 and has the values (a, b) in the coordinate system spanned by the triangle's edge vectors, i.e. (X, Y) = P0 + a·(P1 − P0) + b·(P2 − P0). After the triangle is deformed to the new vertex coordinates P0', P1', P2', the new position of the point is (X', Y') = P0' + a·(P1' − P0') + b·(P2' − P0').
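The mapping rule above amounts to solving for (a, b) in the original triangle's edge-vector basis and reusing the same coefficients in the deformed triangle. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def map_point(p, tri, tri_deformed):
    """Map point p from triangle tri to its deformed counterpart."""
    p0, p1, p2 = (np.asarray(v, float) for v in tri)
    # solve p = p0 + a*(p1 - p0) + b*(p2 - p0) for (a, b)
    basis = np.column_stack((p1 - p0, p2 - p0))
    a, b = np.linalg.solve(basis, np.asarray(p, float) - p0)
    # rebuild the point from the deformed triangle with the same (a, b)
    q0, q1, q2 = (np.asarray(v, float) for v in tri_deformed)
    return q0 + a * (q1 - q0) + b * (q2 - q0)
```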
And calculating the depth information of each triangle in the second triangular Mesh2 according to the hierarchical control point depthlist.
The triangles of Mesh2 are sorted into a drawing order according to their depth information; triangles with larger depth information are drawn later. This yields the final output texture F.
The method comprises the following specific steps:
traversing each triangle in the Mesh, and executing the following steps for each triangle:
initializing the depth information of the triangle to be 0;
traversing the hierarchical control points depthlist, judging whether the distance from the center of the triangle to the current hierarchical control point is less than radius, and if so, adding the depth value of the current hierarchical control point to the depth information of the triangle.
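The depth pass above can be sketched as follows; the flat tuple layout for control points and triangle centers is an assumption:

```python
import math

def assign_depths(centers, depth_points):
    """centers: list of (x, y) triangle centers; depth_points: list of
    (x, y, depth, radius) hierarchical control points. Every triangle
    starts at depth 0 and accumulates the depth of each nearby point."""
    depths = [0.0] * len(centers)
    for i, (cx, cy) in enumerate(centers):
        for (px, py, depth, radius) in depth_points:
            if math.hypot(cx - px, cy - py) < radius:
                depths[i] += depth
    return depths

def draw_order(centers, depth_points):
    """Triangle indices sorted so larger-depth triangles come later."""
    depths = assign_depths(centers, depth_points)
    return sorted(range(len(centers)), key=lambda i: depths[i])

centers = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
depth_points = [(0.0, 0.0, 2.0, 3.0), (10.0, 0.0, 1.0, 3.0)]
order = draw_order(centers, depth_points)
```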
As shown in fig. 3, an apparatus for implementing puppet animation control by binding animation control nodes according to an embodiment of the present application includes:
an obtaining module 301, configured to obtain an original texture of an input image, and perform preprocessing on the original texture to obtain a result texture;
an operation module 302, configured to perform contour detection on the result texture to obtain a contour detection result with an inclusion relationship and a hierarchical structure, and perform a meshing operation on the contour detection result to obtain a first triangular mesh;
the processing module 303 is configured to perform meshing operation on the first triangular mesh by using preset position control points and preset hardening control points to obtain a second triangular mesh;
and the output module 304 is configured to perform rendering processing on the second triangular mesh by combining the preset hierarchical control point, the original texture, and the first triangular mesh, and output a texture result.
The device for realizing puppet animation control by binding animation control nodes provided by the application works as follows: the obtaining module 301 acquires the original texture of an input image and preprocesses it to obtain a result texture; the operation module 302 performs contour detection on the result texture to obtain a contour detection result with an inclusion relationship and a hierarchical structure, and performs a meshing operation on the contour detection result to obtain a first triangular mesh; the processing module 303 performs a meshing operation on the first triangular mesh using the preset position control points and hardening control points to obtain a second triangular mesh; and the output module 304 renders the second triangular mesh in combination with the preset hierarchical control points, the original texture and the first triangular mesh, and outputs a texture result.
The application further provides a mobile terminal, comprising: a memory and a processor, wherein the memory stores a computer program, and when the computer program is executed by the processor, the processor executes the steps of the method for realizing puppet animation control by binding animation control nodes provided by any of the above embodiments:
acquiring original textures of an input image, and preprocessing the original textures to obtain result textures;
carrying out contour detection on the result texture to obtain a contour detection result with an inclusion relation and a hierarchical structure, and carrying out gridding operation on the contour detection result to obtain a first triangular grid;
carrying out gridding operation on the first triangular mesh by adopting preset position control points and hardening control points to obtain a second triangular mesh;
and rendering the second triangular mesh by combining the preset hierarchical control points, the original texture and the first triangular mesh, and outputting a texture result.
Compared with existing deformation algorithms implemented on the mobile terminal through affine transformation, the application realizes local flexible deformation by constructing a triangular mesh and applying constraints, so the effect is natural rather than rigid.
Compared with existing deformation schemes implemented on the mobile terminal using liquefaction, the application performs operation and editing through the position control points, hardening control points and hierarchical control points, which greatly simplifies user operation and improves convenience, accuracy and reliability.
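The constrained deformation above (position control points pin some mesh vertices; the remaining vertices are solved by minimising a loss function, per claims 4 and 8) can be illustrated with a simplified edge-preserving loss. The loss below and the helper name `deform_mesh` are assumptions for illustration, not the patent's exact formulation; because this illustrative loss is quadratic, plain linear least squares suffices here, whereas the patent solves its loss with the Levenberg-Marquardt algorithm:

```python
import numpy as np

def deform_mesh(verts, edges, pins):
    """Solve deformed vertex positions given position control points.

    verts: (n, 2) rest positions; edges: list of (i, j) index pairs;
    pins: dict {vertex index: target (x, y)} -- the position control points.
    Free vertices minimise sum over edges of
        || (v_i - v_j) - (v_i0 - v_j0) ||^2
    i.e. deformed edge vectors stay close to the rest edge vectors.
    """
    verts = np.asarray(verts, dtype=float)
    free = [i for i in range(len(verts)) if i not in pins]
    col = {v: k for k, v in enumerate(free)}       # free vertex -> column
    A = np.zeros((len(edges), len(free)))
    B = np.zeros((len(edges), 2))
    for r, (i, j) in enumerate(edges):
        B[r] = verts[i] - verts[j]                 # rest edge vector d_ij
        if i in pins:
            B[r] -= pins[i]                        # known term to the rhs
        else:
            A[r, col[i]] = 1.0
        if j in pins:
            B[r] += pins[j]
        else:
            A[r, col[j]] = -1.0
    # x and y separate, so solve both columns of B at once.
    sol, *_ = np.linalg.lstsq(A, B, rcond=None)
    out = verts.copy()
    for i, target in pins.items():
        out[i] = target                            # hard constraints
    for i in free:
        out[i] = sol[col[i]]
    return out
```

For a three-vertex chain with its endpoints pinned at (0, 0) and (3, 0), the middle vertex settles at (1.5, 0), the compromise between the two rest edge vectors.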
It can be understood that the method embodiments provided above correspond to the apparatus embodiments described above, and corresponding specific contents may be referred to each other, which are not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (10)
1. A method for realizing puppet animation control by binding animation control nodes is characterized by comprising the following steps:
acquiring original textures of an input image, and preprocessing the original textures to obtain result textures;
carrying out contour detection on the result texture to obtain a contour detection result with an inclusion relation and a hierarchical structure, and carrying out gridding operation on the contour detection result to obtain a first triangular grid;
carrying out gridding operation on the first triangular mesh by adopting preset position control points and hardening control points to obtain a second triangular mesh;
and rendering the second triangular mesh by combining the preset hierarchical control points, the original texture and the first triangular mesh, and outputting a texture result.
2. The method of claim 1, wherein preprocessing the original texture to obtain a resultant texture comprises:
performing main body segmentation on the original texture to obtain a segmentation mask texture;
performing Gaussian blur processing on the segmentation mask texture to obtain Gaussian blur texture;
carrying out binarization processing on the Gaussian blur texture to obtain a binarized texture;
and performing a morphological operation on the binarized texture to obtain a result texture.
3. The method according to claim 1 or 2, wherein performing a gridding operation on the contour detection result to obtain a first triangular grid comprises:
traversing each element of the contour detection result, and then traversing each element in the contour detection result according to a hierarchical order to obtain a shape division result;
traversing each element in the shape division result, and carrying out merging and shape approximation on the outline of each element in the shape division result to obtain a merging result;
and calculating the combination result by adopting a triangular mesh generation algorithm to obtain a first triangular mesh.
4. The method of claim 1, wherein one of the position control points represents a target position of a corresponding mesh vertex in the first triangular mesh, and one of the hardening control points represents a weight of a corresponding triangle in the first triangular mesh; performing a meshing operation on the first triangular mesh by using the preset position control points and hardening control points to obtain the second triangular mesh comprises:
judging whether each mesh vertex of the first triangular mesh appears among the position control points, dividing the vertices that do not appear into a first group, and dividing the vertices that do appear into a second group;
carrying out expansion processing on vertexes in the first group and the second group to obtain expansion vectors, and determining a coordinate to be solved and a constraint coordinate according to the expansion vectors;
and minimizing a loss function to obtain a solved value, and determining the solved value as the second triangular mesh.
5. The method of claim 1, wherein one element in the hierarchical control point represents one position information, depth value and influence range; the step of combining the preset level control points, the original texture and the first triangular mesh to render the second triangular mesh and output a texture result comprises the following steps:
segmenting the original texture into a plurality of triangular textures according to the first triangular mesh;
mapping the plurality of triangular textures to a second triangular mesh according to the mapping relation between the first triangular mesh and the second triangular mesh;
calculating the depth information of each triangle in the second triangular mesh according to the level control points;
sequencing the drawing order of the second triangular mesh according to the depth information to obtain an output texture; wherein the larger the depth information, the further back the triangle is drawn.
6. The method of claim 5, wherein mapping the plurality of triangle textures into the second triangular mesh comprises:
determining coordinate points in the original triangle;
determining the value of the coordinate point in a coordinate system formed by vectors of the vertexes of the triangle;
determining a new coordinate obtained after the coordinate point is deformed;
a new value of the new coordinates is calculated.
7. The method of claim 6, wherein said computing depth information for each triangle in the second triangular mesh according to the hierarchical control points comprises:
traversing each triangle in the first triangular mesh, performing the following steps for each triangle:
initializing depth information of a triangle; wherein the initialization depth information is zero;
and traversing the hierarchical control points, judging whether the distance between the center of each triangle and the current hierarchical control point is smaller than a preset value, and if so, adding the depth information of the current hierarchical control point to the depth information of the triangle.
8. The method of claim 4, wherein the loss function is solved using the Levenberg-Marquardt algorithm.
9. A device for realizing puppet animation control by binding animation control nodes is characterized by comprising:
the acquisition module is used for acquiring the original texture of an input image and preprocessing the original texture to obtain a result texture;
the operation module is used for carrying out contour detection on the result texture to obtain a contour detection result with an inclusion relation and a hierarchical structure, and carrying out gridding operation on the contour detection result to obtain a first triangular grid;
the processing module is used for carrying out gridding operation on the first triangular mesh by adopting preset position control points and hardening control points to obtain a second triangular mesh;
and the output module is used for performing rendering processing on the second triangular mesh by combining the preset level control points, the original texture and the first triangular mesh and outputting a texture result.
10. A mobile terminal, comprising: memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method of implementing puppet animation manipulation by binding animation control nodes according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211324492.0A CN115619910A (en) | 2022-10-27 | 2022-10-27 | Method and device for realizing puppet animation control by binding animation control nodes |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115619910A true CN115619910A (en) | 2023-01-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||