CN110728738A - Image stream animation rendering method and system based on local self-adaptation - Google Patents

Image stream animation rendering method and system based on local self-adaptation

Info

Publication number: CN110728738A (granted as CN110728738B)
Application number: CN201910975378.6A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: rendering, animation, action point, image, thinning
Legal status: Active, granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Inventors: 樊伟富, 张金矿, 熊永春
Original assignee: HANGZHOU QUWEI SCIENCE & TECHNOLOGY Co Ltd
Current assignee: Hangzhou Xiaoying Innovation Technology Co ltd (the listed assignees may be inaccurate)
Application filed by HANGZHOU QUWEI SCIENCE & TECHNOLOGY Co Ltd; priority to CN201910975378.6A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a locally adaptive image flow animation rendering method comprising the following steps: S1, generating an action point set based on user input; S2, constructing a mesh model based on the action point set; and S3, performing animation rendering on the mesh model. The invention uses an adaptive method to process the curve in detail, retaining to the greatest extent the key feature vertices the user expects, and improves the fluency of the flow animation by performing detail processing on the local animation.

Description

Image stream animation rendering method and system based on local self-adaptation
Technical Field
The invention relates to the field of image processing, and in particular to a locally adaptive image stream animation rendering method and system.
Background
Image flow animation technology converts lines and anchor points drawn on an image by the user into an animated vector model, which is then rendered as a dynamic image effect. With this technique for picture editing, users enjoy a high degree of freedom to experiment and produce the animation styles they want. Such a highly playable, entertaining editing function greatly increases users' enthusiasm for creation.
At present, some dynamic picture editing software based on similar technology exists and can meet the dynamic creation needs of most users. However, most of it has shortcomings: rendering is somewhat stiff when animating a user's local curves. For example, a local circular curve cannot form a circular flowing effect, so users with high demands on local animation detail are not satisfied.
The invention patent with publication number CN106898036A discloses an image data processing method and a mobile terminal, which obtain image data to be processed containing target object information, acquire a trigger signal for image dynamic processing, and dynamically process a preset area of the target object in the image data according to the trigger signal. After the trigger signal is obtained, the preset area of the target object is dynamically processed, thereby adding image data elements on top of the original image data of the prior art. Although that patent can dynamically process a preset area, it still cannot form a loop flow effect or the like, and the local dynamic effect is poor.
Therefore, in view of the defects of the prior art, how to realize image stream animation with good local animation detail is a problem to be urgently solved in the field.
Disclosure of Invention
The object of the invention is to provide a locally adaptive image stream animation rendering method and system that addresses the defects of the prior art. The curve is processed in detail using an adaptive method, retaining to the greatest extent the key feature vertices the user expects, and the fluency of the flow animation is improved by performing detail processing on the local animation.
To achieve this purpose, the invention adopts the following technical scheme:
A locally adaptive image flow animation rendering method comprises the following steps:
S1, generating an action point set based on user input;
S2, constructing a mesh model based on the action point set;
S3, performing animation rendering on the mesh model.
Further, the user input includes fixed points, geometric line segments, and curves.
Further, step S1 includes:
and adding the fixed points and the geometric line segments into the action point set.
Further, step S1 includes:
S1.1, performing preliminary thinning on the curve with a small step length;
S1.2, simplifying the preliminary thinning result according to curvature values;
S1.3, resampling the simplified thinning result using the average sub-segment length as the step length;
S1.4, adding the resampled line segment data to the action point set.
Further, step S2 includes:
calculating the network topology of the action point set and computing the vertex data of each action point.
Further, the network topology of the action point set is computed using Delaunay triangulation.
Further, computing the vertex data of each action point includes computing vertex coordinates, texture coordinates, and animation offsets, specifically:
vertex coordinate = action point coordinate
texture coordinate = action point coordinate / image size
animation offset = flow value of the action point / image size
Further, step S3 includes:
s3.1, initializing rendering equipment;
s3.2, loading the image texture to a graphic processor;
s3.3, constructing a shader according to different animation modes;
and S3.4, updating the animation weight variable of the shader, drawing the dynamic grid frame by frame based on the grid model, and outputting a rendering result.
Correspondingly, a locally adaptive image flow animation rendering system is also provided, comprising:
a preprocessing module for generating an action point set based on user input;
a construction module for constructing a mesh model based on the action point set;
a rendering module for performing animation rendering on the mesh model.
Further, the preprocessing module comprises:
a preliminary thinning module for preliminary thinning of the curve with a small step length;
a simplification module for simplifying the preliminary thinning result according to curvature values;
a resampling module for resampling the simplified thinning result using the average sub-segment length as the step length;
a set generation module for adding the resampled line segment data to the action point set.
With the locally adaptive image flow animation rendering method and system provided by the invention, detail processing of the local animation makes the processing and rendering of a user's local curve animation smooth, forming a smooth flowing effect, improving rendering performance, and meeting users' high performance demands for flow animation. A curvature-adaptive algorithm performs vertex thinning on the user's input curve, so that for input curves of different lengths and degrees of bending, the key feature vertices the user expects can be retained to the greatest extent. Delaunay triangulation is used to construct the mesh model and its vertex data, building the triangular mesh with high performance while remaining robust to arbitrary input data.
Drawings
FIG. 1 is a flowchart of the locally adaptive image flow animation rendering method according to embodiment one;
FIG. 2 is a structural diagram of the locally adaptive image flow animation rendering system according to embodiment two.
Detailed Description
The embodiments of the present invention are described below by way of specific examples, and other advantages and effects of the present invention will be readily understood by those skilled in the art from the disclosure of this specification. The invention may also be practiced or applied through other, different embodiments, and the details of this specification may be modified in various respects without departing from the spirit and scope of the present invention. It is to be noted that, in the absence of conflict, the features of the following embodiments and examples may be combined with each other.
It should be noted that the drawings provided with the following embodiments only illustrate the basic idea of the invention schematically: they show only the components related to the invention rather than the number, shape, and size of components in an actual implementation, where the type, quantity, and proportion of each component may change freely and the component layout may be more complicated.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
Example one
As shown in fig. 1, this embodiment provides a locally adaptive image flow animation rendering method, comprising:
S1, generating an action point set based on user input;
The data input by the user includes elements such as fixed points, geometric line segments, and curves. Fixed points and geometric line segments can be used directly as action points, so they are added to the action point set as-is and only the curve data needs processing. For a geometric line segment, its starting point gives the action point coordinates and its own vector gives the flow value; for a point element, its own coordinates give the action point coordinates and the flow value is null.
The purpose of the data processing is to retain certain feature points of the point and line elements input by the user and to ensure that all sub-segments on the curve have equal length. It specifically comprises the following steps:
s2.1, performing preliminary thinning on the curve by adopting a small step length;
the curve data is essentially a multi-line segment expressed by a large number of user touch points, and when vectorized data is processed, a lot of repeated data often exist in records, which brings great inconvenience to further data processing. The redundant data wastes more storage space on the one hand and causes the graphic to be expressed to be unsmooth or not to meet the standard on the other hand. Therefore, the number of data points is reduced to the maximum by some rule under the condition of ensuring that the shape of the vector curve is not changed, and the process is called thinning.
The key of curve thinning is to define a thinning factor, and the diversity of the thinning method determined by the difference of the thinning factor. The present invention does not limit the method of thinning, and optionally, the present invention performs thinning by using step length as a thinning factor. The step method is to extract one point at a certain step length along a continuous curve, compress all the other points, and then fit and approach the adjacent extracted points by straight lines or curves.
In the process of adopting step-length thinning, the characteristic points on the curve, such as the curve corners, and the points with larger curve change are compressed due to thinning to cause curve deformation, therefore, the invention firstly adopts small step-length to primarily thin the curve, adds the points extracted according to the small step-length into the thinning result 1 to restore the shape of the curve as much as possible, and avoids the points with larger curve change due to overlarge step-length from being compressed due to thinning.
The thinning result 1 is step resampling (curve data, small step).
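As a concrete sketch of the step-length resampling described above (the function name `step_resample`, the tuple representation of points, and the choice to always keep the curve's final point are assumptions for illustration, not details from the patent):

```python
import math

def step_resample(points, step):
    """Walk along a polyline and emit a vertex every `step` units of
    arc length. This sketches step-length thinning; keeping the tail
    point is an assumption, not specified by the patent."""
    if not points:
        return []
    out = [points[0]]
    carried = 0.0  # arc length accumulated since the last emitted point
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        while carried + seg >= step and seg > 0:
            # advance to the next sample position on this sub-segment
            t = (step - carried) / seg
            x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            seg -= step - carried
            carried = 0.0
            out.append((x0, y0))
        carried += seg
    if out[-1] != points[-1]:
        out.append(points[-1])  # keep the curve's tail
    return out
```

For example, resampling the segment from (0, 0) to (3, 0) with step 1 yields four evenly spaced points.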
S1.2, simplifying the preliminary thinning result according to curvature values;
After preliminary thinning with a small step length, some redundant points still remain to be deleted. For example, if a section of the curve is relatively straight and the step length is small, many extracted points will lie on that straight section, although in fact only its head and tail points need to be retained. The invention therefore simplifies the vertices of the preliminary thinning result by curvature value.
A curvature threshold is set, the curvature of each vertex in the preliminary thinning result is compared with this preset threshold, and a vertex is rejected when its curvature is below the threshold, yielding the simplified thinning result. Eliminating low-curvature vertices deletes redundant thinning vertices and thereby optimizes the thinning result.
thinning result 2 = CurvatureVertexSimplify(thinning result 1, curvature threshold)
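The curvature-based vertex simplification can be sketched as follows. The patent does not say how the curvature value is computed, so this illustration stands in the turning angle at each vertex for it; all names here are hypothetical.

```python
import math

def curvature_simplify(points, angle_threshold):
    """Drop interior vertices whose turning angle (a simple stand-in
    for the patent's curvature value) falls below the threshold;
    the head and tail points of the polyline are always kept."""
    if len(points) <= 2:
        return list(points)
    out = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        a1 = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        a2 = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        # signed angle difference wrapped into (-pi, pi]
        turn = abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
        if turn >= angle_threshold:
            out.append(cur)  # keep a genuine corner / high-curvature point
    out.append(points[-1])
    return out
```

On a straight run of samples only the endpoints survive, while a corner vertex is retained.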
S1.3, resampling the simplified thinning result using the average sub-segment length as the step length;
To give the action points on a curve equal animation amplitude, the invention thins the simplified thinning result once more, resampling the curve with the average sub-segment length as the step length. Furthermore, to keep the sampling step from becoming too large, a maximum curve distance value is preset and the sampling step is MIN(Lmax, Lx), where Lmax is the preset maximum curve distance value and Lx is the average sub-segment length. The maximum curve distance value is the maximum flow amplitude distance on the curve, and the average sub-segment length is the average length of the polyline's sub-segments.
thinning result 3 = StepResample(thinning result 2, MIN(Lmax, Lx))
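The clamped sampling step MIN(Lmax, Lx) can be computed as below; the function and argument names are illustrative, and Lx is taken over the polyline's consecutive point pairs as described above.

```python
import math

def sampling_step(points, l_max):
    """Step length for the final resampling pass: the average
    sub-segment length Lx, clamped by the preset maximum flow
    amplitude distance Lmax, i.e. MIN(Lmax, Lx)."""
    segs = [math.hypot(x1 - x0, y1 - y0)
            for (x0, y0), (x1, y1) in zip(points, points[1:])]
    lx = sum(segs) / len(segs)  # average sub-segment length
    return min(l_max, lx)
```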
S1.4, adding the resampled line segment data to the action point set.
After thinning, the curve is divided into several line segments. As with the geometric line segments input by the user, the line segment data from the thinning result is added to the action point set: the starting point of each segment element gives the action point coordinates and its own vector gives the flow value.
S2, constructing a mesh model based on the action point set;
The mesh model is constructed so that the preliminary mesh data can be rendered dynamically; construction specifically comprises computing the network topology of the action point set and computing the vertex data of each action point.
The invention uses Delaunay triangulation to compute the network topology of the action point set: a two-dimensional Delaunay triangulation over the vertex data of the whole action point set yields the index sequence of the triangular mesh vertices.
Definition of the Delaunay triangulation rule: suppose V is an action point set, an edge e is a closed line segment whose endpoints are action points in the set, and E is a set of such edges. A triangulation T = (V, E) of the action point set V is then a planar graph G satisfying the following conditions:
1. Except for its endpoints, an edge in the planar graph contains no point of the point set.
2. There are no intersecting edges.
3. All faces of the planar graph are triangular, and the union of all triangular faces is the convex hull of the scattered point set V.
Suppose e is an edge in E with endpoints a and b. Then e is called a Delaunay edge if the following condition holds: there exists a circle passing through a and b whose interior contains no other point of the action point set V (at most, points lying on the circle itself may be cocircular); this is known as the empty circle property. If a triangulation T of the action point set V contains only Delaunay edges, it is called a Delaunay triangulation. Equivalently, a triangulation T of V is a Delaunay triangulation if and only if the interior of the circumcircle of every triangle in T contains no point of V.
Because Delaunay triangulation maximizes the minimum angle, is the triangulation closest to regular, and is unique, the method can generate, during topology construction, a natural triangular mesh that matches the user's expectations.
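As an illustration of this step (the patent names no particular implementation; `scipy.spatial.Delaunay` is simply one readily available two-dimensional Delaunay triangulator, and the action point coordinates below are invented):

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical action points: four image corners plus one interior point.
action_points = np.array([[0.0, 0.0], [1.0, 0.0],
                          [0.0, 1.0], [1.0, 1.0],
                          [0.5, 0.5]])

tri = Delaunay(action_points)   # two-dimensional Delaunay triangulation
index_sequence = tri.simplices  # shape (n_triangles, 3): vertex indices per triangle
```

For this configuration the interior point connects to all four corners, giving four triangles, each listed as a row of three vertex indices.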
Computing the vertex data of each action point includes computing vertex coordinates, texture coordinates, and animation offsets, specifically:
vertex coordinate = action point coordinate
texture coordinate = action point coordinate / image size
animation offset = flow value of the action point / image size
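The three per-vertex formulas can be collected into a small helper. The dictionary layout, the argument names, and the zero default used for a null flow value are assumptions for illustration:

```python
def build_vertex(action_point, flow, image_size):
    """Per-vertex data following the patent's three formulas.
    `flow` is None for fixed points (null flow value)."""
    px, py = action_point
    w, h = image_size
    fx, fy = flow if flow is not None else (0.0, 0.0)
    return {
        "position": (px, py),          # vertex coordinate = action point coordinate
        "texcoord": (px / w, py / h),  # texture coordinate = coordinate / image size
        "offset":   (fx / w, fy / h),  # animation offset = flow value / image size
    }
```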
S3, performing animation rendering on the mesh model.
After the mesh model is constructed based on the user's input, the method performs animation rendering on it to obtain the image flow animation. The specific process is as follows:
s3.1, initializing rendering equipment;
initializing the rendering device specifically includes preparing the hardware rendering environment, preparing the frame buffer, and preparing for the rendering process.
S3.2, loading the image texture to the graphics processor;
The method loads the picture selected by the user and pushes it to the graphics processing unit (GPU) as a texture for display.
S3.3, constructing a shader according to the animation mode;
Shaders mainly include the vertex shader and the pixel shader. A vertex shader is a set of instruction code executed when a vertex is rendered; when rendering a vertex, the API executes the instructions in the vertex shader to control each vertex, including its rendering, its position, and whether it is displayed on screen. The pixel shader is likewise a set of instructions, executed when the pixels of a primitive are rendered; many pixels are rendered on each execution, and one pixel shader invocation operates on an individual pixel. Like the vertex shader, the pixel shader source code is loaded onto the hardware through APIs.
The pixel shader can, with suitable instructions, apply an offset before sampling to achieve the picture-flow effect. The invention therefore uses an animation weight variable in the pixel shader to apply a weight-dependent sampling offset before texture sampling, achieving an animation effect that changes with the weight parameter. Note that the invention is not limited to a specific shader; the corresponding shader can be constructed according to the specific animation mode.
S3.4, updating the shader's animation weight variable, drawing the dynamic mesh frame by frame based on the mesh model, and outputting the rendering result.
After the user switches the animation mode, a new shader program is created and used, and its input parameters are updated. Through the created shader, the dynamic mesh is drawn frame by frame from the mesh model, producing rendering results that are stored in the frame buffer. The frame buffer data is exported frame by frame into other media files to output the rendering results.
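A CPU-side sketch of the frame-by-frame loop and the weight-dependent sampling offset follows. The exact offset formula is an assumption (the patent only states that a sampling offset related to the animation weight is applied before texture sampling), and all names are illustrative:

```python
def flow_sample_uv(texcoord, anim_offset, weight):
    """Texture coordinate actually sampled for one vertex: the
    animation-weight-scaled offset is subtracted before sampling,
    so as the weight ramps over time the image appears to flow."""
    u, v = texcoord
    du, dv = anim_offset
    return (u - weight * du, v - weight * dv)

def render_animation(mesh, frame_count):
    """Frame-by-frame drawing loop (an illustrative stand-in for the
    GPU pipeline): update the weight per frame and record, for each
    vertex, the texture coordinate the shader would sample."""
    frames = []
    for frame in range(frame_count):
        weight = frame / frame_count  # animation weight variable for this frame
        frames.append([flow_sample_uv(v["texcoord"], v["offset"], weight)
                       for v in mesh])
    return frames
```

In a real pipeline the per-frame results would be drawn into the frame buffer and exported; here they are simply collected.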
Example two
As shown in fig. 2, this embodiment provides a locally adaptive image flow animation rendering system, comprising:
a preprocessing module for generating an action point set based on user input;
The data input by the user includes elements such as fixed points, geometric line segments, and curves. Fixed points and geometric line segments can be used directly as action points, so they are added to the action point set as-is and only the curve data needs processing. For a geometric line segment, its starting point gives the action point coordinates and its own vector gives the flow value; for a point element, its own coordinates give the action point coordinates and the flow value is null.
The purpose of the data processing is to retain certain feature points of the point and line elements input by the user and to ensure that all sub-segments on the curve have equal length. The preprocessing module specifically comprises:
a preliminary thinning module for preliminary thinning of the curve with a small step length;
Curve data is essentially a polyline expressed by a large number of user touch points. When such vectorized data is processed, the records often contain a great deal of repeated data, which makes further processing inconvenient. Redundant data wastes storage space on the one hand, and on the other hand makes the expressed graphics unsmooth or substandard. Therefore, under the condition that the shape of the vector curve does not change, the number of data points is reduced as much as possible by some rule; this process is called thinning.
The key to curve thinning is defining a thinning factor; different thinning factors yield different thinning methods. The invention does not limit the thinning method; optionally, the step length is used as the thinning factor. The step-length method extracts one point every fixed step length along the continuous curve, discards all remaining points, and then fits the adjacent extracted points with straight lines or curves.
When thinning by step length, feature points on the curve, such as corners and points where the curve changes sharply, may be discarded by the thinning and deform the curve. Therefore, the invention first thins the curve preliminarily with a small step length and adds the points extracted at that small step length to thinning result 1, restoring the curve's shape as closely as possible and preventing points of sharp curve change from being discarded because the step length is too large.
thinning result 1 = StepResample(curve data, small step length)
a simplification module for simplifying the preliminary thinning result according to curvature values;
After preliminary thinning with a small step length, some redundant points still remain to be deleted. For example, if a section of the curve is relatively straight and the step length is small, many extracted points will lie on that straight section, although in fact only its head and tail points need to be retained. The invention therefore simplifies the vertices of the preliminary thinning result by curvature value.
A curvature threshold is set, the curvature of each vertex in the preliminary thinning result is compared with this preset threshold, and a vertex is rejected when its curvature is below the threshold, yielding the simplified thinning result. Eliminating low-curvature vertices deletes redundant thinning vertices and thereby optimizes the thinning result.
thinning result 2 = CurvatureVertexSimplify(thinning result 1, curvature threshold)
a resampling module for resampling the simplified thinning result using the average sub-segment length as the step length;
To give the action points on a curve equal animation amplitude, the invention thins the simplified thinning result once more, resampling the curve with the average sub-segment length as the step length. Furthermore, to keep the sampling step from becoming too large, a maximum curve distance value is preset and the sampling step is MIN(Lmax, Lx), where Lmax is the preset maximum curve distance value and Lx is the average sub-segment length. The maximum curve distance value is the maximum flow amplitude distance on the curve, and the average sub-segment length is the average length of the polyline's sub-segments.
thinning result 3 = StepResample(thinning result 2, MIN(Lmax, Lx))
a set generation module for adding the resampled line segment data to the action point set.
After thinning, the curve is divided into several line segments. As with the geometric line segments input by the user, the line segment data from the thinning result is added to the action point set: the starting point of each segment element gives the action point coordinates and its own vector gives the flow value.
a construction module for constructing a mesh model based on the action point set;
The mesh model is constructed so that the preliminary mesh data can be rendered dynamically; construction specifically comprises computing the network topology of the action point set and computing the vertex data of each action point.
The invention uses Delaunay triangulation to compute the network topology of the action point set: a two-dimensional Delaunay triangulation over the vertex data of the whole action point set yields the index sequence of the triangular mesh vertices.
Definition of the Delaunay triangulation rule: suppose V is an action point set, an edge e is a closed line segment whose endpoints are action points in the set, and E is a set of such edges. A triangulation T = (V, E) of the action point set V is then a planar graph G satisfying the following conditions:
1. Except for its endpoints, an edge in the planar graph contains no point of the point set.
2. There are no intersecting edges.
3. All faces of the planar graph are triangular, and the union of all triangular faces is the convex hull of the scattered point set V.
Suppose e is an edge in E with endpoints a and b. Then e is called a Delaunay edge if the following condition holds: there exists a circle passing through a and b whose interior contains no other point of the action point set V (at most, points lying on the circle itself may be cocircular); this is known as the empty circle property. If a triangulation T of the action point set V contains only Delaunay edges, it is called a Delaunay triangulation. Equivalently, a triangulation T of V is a Delaunay triangulation if and only if the interior of the circumcircle of every triangle in T contains no point of V.
Because Delaunay triangulation maximizes the minimum angle, is the triangulation closest to regular, and is unique, the system can generate, during topology construction, a natural triangular mesh that matches the user's expectations.
Computing the vertex data of each action point includes computing vertex coordinates, texture coordinates, and animation offsets, specifically:
vertex coordinate = action point coordinate
texture coordinate = action point coordinate / image size
animation offset = flow value of the action point / image size
a rendering module for performing animation rendering on the mesh model.
After the mesh model is constructed based on the user's input, the system performs animation rendering on it to obtain the image flow animation. The rendering module specifically comprises:
an initialization module for initializing a rendering device;
initializing the rendering device specifically includes preparing the hardware rendering environment, preparing the frame buffer, and preparing for the rendering process.
a texture loading module for loading the image texture to the graphics processor;
The system loads the picture selected by the user and pushes it to the graphics processing unit (GPU) as a texture for display.
The shader building module is used for building shaders according to different animation modes;
shaders (shaders) mainly include a Vertex Shader (Vertex Shader) and a pixel Shader (PixelShader). A vertex shader is a set of instruction code that is executed when a vertex is rendered. When rendering a vertex, the API executes instructions in the vertex shader to control each vertex, including rendering, determining the location, and whether to display on the screen. The pixel shader is also a set of instructions that are executed when pixels in vertices are rendered. At each execution time, many pixels will be rendered. One pixel shader operates on individual pixels on vertices. Like the vertex shader, the pixel shader source code is loaded into the hardware through some APIs.
The pixel shader can realize sampling after applying the offset by using a specific instruction to achieve the effect of picture flowing, therefore, the invention utilizes the pixel shader to apply the sampling offset related to the animation weight before texture sampling by using an animation weight variable, thereby achieving the animation effect changing along with the weight parameter. It should be noted that the present invention is not limited to a specific shader, and the corresponding shader may be constructed according to a specific animation mode.
The drawing module is used for updating the animation weight variable of the shader, drawing the dynamic mesh frame by frame based on the mesh model, and outputting the rendering result.
After the user switches the animation mode, a new shader program is created and used, and its input parameters are updated. The dynamic mesh is drawn frame by frame from the mesh model with the created shader, and the resulting frames are stored in the frame buffer. To output the rendering result, the frame buffer data is read back and the frame-by-frame results are exported into other media files.
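The frame-by-frame drive loop amounts to updating the weight variable and issuing one draw per frame. The sketch below is a hedged Python outline: `draw_frame` and the triangle-wave weight schedule are hypothetical stand-ins for the uniform upload and draw call, which the patent does not specify.

```python
def render_flow_animation(draw_frame, duration_s=2.0, fps=30):
    """Frame-by-frame loop: update the animation weight each frame and
    draw the mesh. `draw_frame(weight)` stands in for uploading the
    weight uniform and issuing the draw call (hypothetical API)."""
    frames = []
    total = int(duration_s * fps)
    for i in range(total):
        # weight cycles over each second; a triangle wave in [0, 1]
        # avoids a visible jump when the cycle restarts
        t = (i / fps) % 1.0
        weight = 2 * t if t < 0.5 else 2 * (1 - t)
        frames.append(draw_frame(weight))
    return frames
```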
Therefore, the locally adaptive image stream animation rendering method and system provided by the invention can apply detail processing to local animations, so that the user's local curve animations are processed and rendered smoothly, forming a smooth flowing effect, improving rendering performance, and meeting the user's high-performance requirements for flow animation. A curvature-adaptive algorithm thins the vertices of the curve input by the user; for input curves of different lengths and degrees of bending, the key feature vertices expected by the user are preserved to the greatest extent. Delaunay triangulation is adopted to construct the mesh model and build the vertex data, which constructs the triangular mesh with high performance while remaining robust to arbitrary input data.
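As a rough illustration of the curvature-adaptive thinning and resampling summarized above: the turn-angle criterion, the threshold value, and the function below are all assumptions for the sketch, since the patent does not fix a specific curvature formula.

```python
import numpy as np

def thin_curve(points, turn_thresh=0.1):
    """Sketch of the assumed three-stage pipeline: keep a vertex when
    the turn angle at it exceeds `turn_thresh` (curvature-based
    simplification), then resample the kept polyline using the
    average sub-segment length as the step size."""
    pts = np.asarray(points, dtype=float)
    # stages 1-2: drop interior vertices where the curve is nearly straight
    keep = [0]
    for i in range(1, len(pts) - 1):
        a, b = pts[i] - pts[i - 1], pts[i + 1] - pts[i]
        turn = abs(np.arctan2(a[0] * b[1] - a[1] * b[0], np.dot(a, b)))
        if turn > turn_thresh:
            keep.append(i)
    keep.append(len(pts) - 1)
    kept = pts[keep]
    # stage 3: resample at the average sub-segment length
    seg = np.linalg.norm(np.diff(kept, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    samples = np.arange(0.0, dist[-1] + 1e-9, seg.mean())
    return np.stack([np.interp(samples, dist, kept[:, 0]),
                     np.interp(samples, dist, kept[:, 1])], axis=1)
```

A nearly straight input collapses to few vertices, while sharp bends survive, which matches the stated goal of preserving the key feature vertices for curves of any length or bending degree.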
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A locally adaptive image stream animation rendering method, characterized by comprising the following steps:
S1, generating an action point set based on user input;
S2, constructing a mesh model based on the action point set;
and S3, performing animation rendering on the mesh model.
2. The image stream animation rendering method according to claim 1, wherein the user input comprises fixed points, geometric line segments, and curves.
3. The image stream animation rendering method according to claim 2, wherein the step S1 comprises: adding the fixed points and the geometric line segments to the action point set.
4. The image stream animation rendering method according to claim 2, wherein the step S1 comprises:
S1.1, performing preliminary thinning on the curve with a small step size;
S1.2, simplifying the preliminary thinning result according to the curvature values;
S1.3, resampling the simplified thinning result using the average sub-segment length as the step size;
and S1.4, adding the resampled line segment data to the action point set.
5. The image stream animation rendering method according to claim 4, wherein the step S2 comprises: computing the network topology of the action point set and computing the vertex data of each action point.
6. The image stream animation rendering method according to claim 5, wherein the network topology of the action point set is computed using Delaunay triangulation.
7. The image stream animation rendering method according to claim 5, wherein computing the vertex data of each action point comprises computing vertex coordinates, texture coordinates, and an animation offset, specifically:
vertex coordinates = coordinates of the action point;
texture coordinates = coordinates of the action point / image size;
animation offset = flow value of the action point / image size.
8. The image stream animation rendering method according to claim 5, wherein the step S3 comprises:
S3.1, initializing the rendering device;
S3.2, loading the image texture to the graphics processor;
S3.3, constructing shaders according to the different animation modes;
and S3.4, updating the animation weight variable of the shader, drawing the dynamic mesh frame by frame based on the mesh model, and outputting the rendering result.
9. A locally adaptive image stream animation rendering system for carrying out the image stream animation rendering method of any one of claims 1-8, characterized by comprising:
a preprocessing module for acquiring an action point set based on user input;
a construction module for constructing a mesh model based on the action point set;
and a rendering module for performing animation rendering on the mesh model.
10. The image stream animation rendering system according to claim 9, wherein the preprocessing module comprises:
a preliminary thinning module for preliminarily thinning the curve with a small step size;
a simplification module for simplifying the preliminary thinning result according to the curvature values;
a resampling module for resampling the simplified thinning result using the average sub-segment length as the step size;
and a set generation module for adding the resampled line segment data to the action point set.
CN201910975378.6A 2019-10-14 2019-10-14 Image stream animation rendering method and system based on local self-adaption Active CN110728738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910975378.6A CN110728738B (en) 2019-10-14 2019-10-14 Image stream animation rendering method and system based on local self-adaption

Publications (2)

Publication Number Publication Date
CN110728738A true CN110728738A (en) 2020-01-24
CN110728738B CN110728738B (en) 2023-05-12

Family

ID=69221217

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910975378.6A Active CN110728738B (en) 2019-10-14 2019-10-14 Image stream animation rendering method and system based on local self-adaption

Country Status (1)

Country Link
CN (1) CN110728738B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6509902B1 (en) * 2000-02-28 2003-01-21 Mitsubishi Electric Research Laboratories, Inc. Texture filtering for surface elements
US6580425B1 (en) * 2000-02-28 2003-06-17 Mitsubishi Electric Research Laboratories, Inc. Hierarchical data structures for surface elements
CN1996392A (en) * 2006-08-14 2007-07-11 东南大学 Figure reconstruction method in 3D scanning system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 22nd floor, block a, Huaxing Times Square, 478 Wensan Road, Xihu District, Hangzhou, Zhejiang 310000

Patentee after: Hangzhou Xiaoying Innovation Technology Co.,Ltd.

Address before: 16 / F, HANGGANG Metallurgical Science and technology building, 294 Tianmushan Road, Xihu District, Hangzhou City, Zhejiang Province, 310012

Patentee before: HANGZHOU QUWEI SCIENCE & TECHNOLOGY Co.,Ltd.
