CN115439314A - Stylization method, equipment and storage medium

Info

Publication number: CN115439314A
Application number: CN202211083099.7A (priority application)
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: image data, color, contour, original image, points
Inventors: 王传鹏, 李腾飞, 张婷
Applicant/Assignee: Shanghai Hard Link Network Technology Co., Ltd.
Legal status: Pending

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map (under G06T 3/00 Geometric image transformations in the plane of the image)
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour (under G06T 11/00 2D [Two Dimensional] image generation)
    • G06T 11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture (under G06T 11/00 2D [Two Dimensional] image generation)
    • G06T 7/90 Determination of colour characteristics (under G06T 7/00 Image analysis)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a stylization method, a stylization device, and a storage medium, wherein the method comprises the following steps: acquiring original image data; detecting the contour of each element in the original image data; detecting contour points in the contour that characterize color changes; constructing a plurality of color blocks in the original image data according to the contour points; and filling the color blocks with uniform colors, taking the colors of the elements in the original image data as reference, to obtain target image data. Each color block keeps the approximate contour of the elements in the original image data and the approximate color of the elements at the position where the color block is located, so the content of the target image data keeps the general look of the original image data. Because contours, color changes, and color blocks are easy to identify, the changes to the structure and color of the picture during stylization are predictable, which makes it convenient for the user to find a suitable stylized effect.

Description

Stylization method, equipment and storage medium
Technical Field
The present invention relates to the field of multimedia technologies, and in particular, to a stylization method, a stylization apparatus, and a storage medium.
Background
In scenes such as social networking, short video, and advertising, users produce large numbers of multimedia files in the form of image data or video data. After an original multimedia file is generated, it is usually post-processed to personalize it.
At present, a user with professional art skills can use a professional modeling tool to achieve the Low Poly (low polygon) look while making a multimedia file, but this approach has a high technical threshold, belongs to early-stage production, and is costly.
At present, to reduce the technical threshold and the cost of achieving the Low Poly look, image data of similar styles are mostly collected as samples to train a neural network, and the neural network performs style transfer on a multimedia file to convert the style of its picture.
However, such a neural network has a huge structure, occupies substantial resources, and stylizes slowly; the changes it makes to the structure, color, and other aspects of a picture during style transfer are unpredictable, so the operation may need to be repeated to find a suitable transfer effect.
Disclosure of Invention
The invention provides a stylization method, a stylization device, and a storage medium, which aim to improve the speed and predictability of applying an impressionist-like Low Poly stylization to a multimedia file.
According to an aspect of the present invention, there is provided a stylization method, including:
acquiring original image data;
detecting the outline of each element in the original image data;
detecting contour points in the contour that characterize color changes;
constructing a plurality of color blocks in the original image data according to the contour points;
and filling uniform colors on the color blocks by taking the colors of all elements in the original image data as references to obtain target image data.
According to another aspect of the present invention, there is provided a stylizing method comprising:
acquiring original video data whose content introduces a game, wherein the original video data comprises multiple frames of original image data;
detecting the outline of each element in the original image data;
detecting contour points in the contour that characterize color changes;
constructing a plurality of color blocks which are adjacent to each other in the original image data according to the contour points;
filling uniform colors on the color blocks by taking the colors of all elements in the original image data as references to obtain target image data;
and replacing the original image data in the original video data with the target image data to obtain target video data.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the stylizing method of any one of the embodiments of the invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing a computer program for causing a processor to implement a stylizing method according to any one of the embodiments of the present invention when executed.
In the present embodiment, original image data is acquired; the contour of each element in the original image data is detected; contour points characterizing color changes are detected in the contour; a plurality of color blocks are constructed in the original image data according to the contour points; and the color blocks are filled with uniform colors, taking the colors of the elements in the original image data as reference, to obtain target image data. The method uses contours and color changes to construct the color blocks and fills them with colors referenced from the original image data, realizing the Low Poly style. It does not depend on professional editing tools, so the technical threshold is low, and it does not depend on a neural network: the whole process is simple to run, the amount of calculation is low, the occupation of resources can be greatly reduced, and the stylization speed is increased, improving stylization efficiency.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a stylization method according to an embodiment of the present invention;
fig. 2 is an exemplary diagram of the Low Poly style according to an embodiment of the present invention;
FIG. 3 is a flow chart of a stylizing method according to a second embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a stylizing apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of a stylizing apparatus according to a fourth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device for implementing the fifth embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of a stylization method according to an embodiment of the present invention. The method is applicable to the case where color blocks are constructed according to contours and color changes so as to apply the Low Poly style to image data. The method may be executed by a stylization apparatus, which may be implemented in the form of hardware and/or software and configured in an electronic device. As shown in fig. 1, the method includes:
step 101, acquiring original image data.
Low Poly (low polygon) modeling is the opposite of high-poly (fine) modeling: a fine model has many faces and rich detail, while a Low Poly model has few faces and little detail.
In the era of low computing power, to keep pictures smooth in scenes such as game production, and especially three-dimensional real-time rendering, scene complexity was usually reduced, the number of polygon faces was cut, and the missing details were compensated by texture mapping; the Low Poly style gradually took shape in this way.
With the development of computer technology and powerful game engines, models with rich details can now be made, picture styles are more realistic, and immersion, identification, and user experience are greatly improved.
However, as realistic styles became popular, aesthetic fatigue and the pursuit of personalization followed, and styles such as retro pixel art and Low Poly have gradually revived to enrich an otherwise uniform picture style.
Furthermore, the Low Poly style is an artistic style that deliberately cuts and stacks a certain number of simple geometric shapes according to the modeling rules of the object; it emphasizes a highly summarized treatment of the overall picture effect, the object, the color matching, and the light-and-shadow relationship, with abstract expression and extreme simplicity.
In this embodiment, the computer program implementing Low Poly stylization is simple. It may therefore be deployed in a server and provide a Low Poly stylization service to users of a local area network and/or a public network through an API (Application Programming Interface), or it may be deployed in a client in the form of a plug-in, hard coding, or the like, to provide the Low Poly stylization service to users of the client.
If the computer program implementing Low Poly stylization is deployed in the server, the user may call the API provided by the server from the client and upload the image data to be stylized to the server, or provide the server with a network address of the image data to be stylized, from which the server downloads the image data.
If the computer program implementing Low Poly stylization is deployed in the client, the user may select image data located in a local directory in the client and wait for stylization, or call the camera to capture a frame of image data and wait for stylization.
For the sake of distinction, the image data to be stylized is recorded as original image data; its form differs across scenes, for example photographs, posters, and avatars.
Step 102, detecting the outline of each element in the original image data.
In general, the content of the original image data includes a variety of elements, which may be virtual elements such as virtual portrait, virtual lawn, virtual tree, virtual building, etc. in the scenes of making games, making animations, etc., and real elements such as user's head portrait, building, etc. in the scenes of user's self-portrait, recording short video, etc.
In this embodiment, the contour (also called edge) of each element may be detected in the original image data, so that the Low Poly style can be applied to the original image data according to the contours of the elements while the basic form of each element is retained.
Further, the contours of the elements in the original image data may be detected using an edge detection operator, such as the Sobel, Prewitt, or Canny operator, or using a usable mathematical model extracted from the visual system, such as a Gabor filter or the CORE model.
Different contour detection methods have different emphases. In this embodiment, the contour detection method may be set according to the service requirement; different methods will detect different contours for the elements, which this embodiment does not limit.
In one embodiment of the present invention, step 102 may include the steps of:
step 1021, calculating the gray value of the original image data to obtain the gray image data.
In general, the original image data is color image data, and in order to increase the processing speed, gray values of pixels of the original image data may be calculated and buffered, and a frame of image data is represented by the gray values of the pixels and is recorded as gray image data.
Exemplarily, the color value of each pixel point in the original image data can be substituted into any one of the following formulas according to the service requirement, and the gray value of the pixel point is output:
(1)、Gray=0.3*R+0.59*G+0.11*B
wherein, gray is the grey scale value, and R is the red component in the colour value, and G is the green component in the colour value, and B is the blue component in the colour value.
The formula is widely applied and can meet the general service requirements, and the formula uses floating point number calculation, so that the calculation amount is large.
(2)、Gray=(77*R+150*G+29*B+128)>>8
Wherein, gray is the grey scale value, and R is the red component in the colour value, and G is the green component in the colour value, and B is the blue component in the colour value.
The formula adopts integer operation and bit operation, so that the calculation amount can be greatly reduced, and the speed is improved.
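As an illustration only, a minimal sketch of the two conversions, assuming an 8-bit RGB image held in a NumPy array (the function names are chosen for this example):

    import numpy as np

    def gray_float(rgb):
        # Formula (1): floating-point weighted sum of the R, G, B components
        r, g, b = (rgb[..., i].astype(np.float32) for i in range(3))
        return (0.3 * r + 0.59 * g + 0.11 * b).astype(np.uint8)

    def gray_int(rgb):
        # Formula (2): integer multiply-accumulate plus a right shift,
        # avoiding floating-point arithmetic (77 + 150 + 29 = 256)
        r, g, b = (rgb[..., i].astype(np.uint32) for i in range(3))
        return ((77 * r + 150 * g + 29 * b + 128) >> 8).astype(np.uint8)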
Step 1022, calculating a gradient value for the gray image data to obtain gradient image data.
In this embodiment, a gradient value may be calculated for each pixel in the grayscale image data; the gradient values of the pixels represent a frame of image data recorded as gradient image data. The gradient value represents how fast the original image data changes and belongs to the edge information of the original image data: at edge parts of the original image data, the gray value changes sharply and the gradient value is large; at smoother parts, the gray value changes little and the corresponding gradient value is small.
In one approach, a first convolution kernel and a second convolution kernel may be loaded separately, and weights in the first convolution kernel and the second convolution kernel may be amplified to increase differences between pixels.
Illustratively, the first convolution kernel G_x and the second convolution kernel G_y may respectively be as follows (the figure in the original publication is reproduced here on the assumption of Sobel-style kernels, whose weights may additionally be amplified):

    G_x = [ -1  0  +1 ]        G_y = [ -1  -2  -1 ]
          [ -2  0  +2 ]              [  0   0   0 ]
          [ -1  0  +1 ]              [ +1  +2  +1 ]
a two-dimensional first convolution operation is performed on the grayscale image data in the horizontal direction (x direction) using a first convolution kernel, resulting in a gradient value in the horizontal direction.
A second convolution operation is performed on the gradation image data in the vertical direction (y direction) using a second convolution kernel, resulting in a gradient value in the vertical direction.
And fusing the gradient values in the horizontal direction and the vertical direction into gradient values of the gray image data to obtain the gradient image data.
For example, the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is taken as the gradient value of the grayscale image data, giving the gradient image data.
For another example, the square root of the sum of the square of the horizontal gradient and the square of the vertical gradient is taken as the gradient value of the grayscale image data, giving the gradient image data.
Step 1023, performing a normalization process on the gradient values in the gradient image data.
In this embodiment, normalization processing (Normalization) may be performed on the gradient values of the pixels in the gradient image data in a min-max (minimum-maximum) manner, so that the scales of the gradient values of the pixels are adjusted to a similar range, which is convenient for calculation.
Step 1024, taking the absolute value of the normalized gradient values in the gradient image data to obtain the contours of the elements.
In this embodiment, the absolute value of the gradient value after normalization processing of each pixel point in the gradient image data is taken, and a frame of image data is represented by the absolute value of the gradient of each pixel point and is recorded as the contour of each element.
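Putting steps 1022 through 1024 together, a hedged sketch (SciPy's convolve2d is an assumed dependency; the kernels are the Sobel-style kernels above):

    import numpy as np
    from scipy.signal import convolve2d

    GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    GY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float32)

    def element_contours(gray):
        gray = gray.astype(np.float32)
        gx = convolve2d(gray, GX, mode="same", boundary="symm")  # horizontal gradient
        gy = convolve2d(gray, GY, mode="same", boundary="symm")  # vertical gradient
        grad = np.sqrt(gx ** 2 + gy ** 2)      # fuse the two directions (step 1022)
        # min-max normalization (step 1023); the values are already non-negative
        # afterwards, so the absolute value of step 1024 is kept only as a mirror
        grad = (grad - grad.min()) / (grad.max() - grad.min() + 1e-8)
        return np.abs(grad)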
Step 103, detecting contour points representing color changes in the contour.
In the original image data, pixel points with obvious color change are searched along the contour (image data) and are marked as contour points.
The contour points with obvious color change can serve as the basis for reducing detail in the Low Poly style, so that after details are removed the result still follows, to a certain extent, the color change rules of the original image data, reducing distortion.
In one embodiment of the present invention, step 103 may comprise the steps of:
Step 1031, calculating a histogram for the grayscale image data.
In this embodiment, the grayscale image data corresponding to the original image data may be looked up in the cache; the grayscale image data contains the gray value of each pixel in the original image data.
For the grayscale image data, a histogram can be calculated. The histogram describes the gray-level distribution in the original image data: it intuitively shows the proportion of each gray level and the number of pixels at each gray level, but it contains no information about the positions of the pixels in the original image data. The histogram is therefore unaffected by rotation and translation of the original image data and can be used as a feature of the original image data.
In general, the histogram is represented in a coordinate manner, the abscissa is a gray level, and the ordinate is a probability of occurrence of the gray level.
Step 1032, calculating the cumulative distribution probability for the histogram.
In this embodiment, a cumulative distribution probability may be calculated for the histogram using a cumulative distribution function (CDF).
The cumulative distribution function, also called the distribution function, is the integral of the probability density function and completely describes the probability distribution of a real random variable X (here, the histogram).
In the process of searching for contour points, to improve the accuracy of detecting color changes, the contrast of the original image data is increased; to this end, two conditions must hold:
1. No matter how the mapping is done, the original order of the pixel values is kept: brighter areas stay brighter and darker areas stay darker.
2. If the original image data has eight bits, the mapped pixel values should lie between 0 and 255.
Combining the two conditions: the cumulative distribution function is monotonically increasing (preserving the order relation) and its range is 0 to 1 (avoiding out-of-range values), so it meets both conditions.
Step 1033, mapping the original image data to reference image data according to the cumulative distribution probability.
Because the original image data is composed of pixels, solving the discrete cumulative distribution function equalizes the original image data and thereby improves its contrast.
In this process, a function that maps the color values of the pixels can be generated from the cumulative distribution probability and recorded as the mapping relation. The color values of each channel of the pixels in the original image data (e.g., the red, green, and blue components) are mapped through this relation, and the mapped channels are combined with a merge() method to obtain the reference image data.
Illustratively, the mapping relationship is as follows:

    s_k = (L - 1) * Σ_{j=0}^{k} (n_j / n),   k = 0, 1, 2, …, L-1

where s_k is the value of gray level k after the cumulative-distribution-function mapping, n is the total number of pixels in the original image data, n_j is the number of pixels at gray level j, and L is the total number of gray levels in the grayscale image data.
In this embodiment, the histogram is used to equalize the colors of the original image data and thereby improve its color contrast: the transformation drives the original image data toward an image whose gray levels each contain the same number of pixels (i.e., whose output histogram is flat), producing an image with a balanced gray-level distribution probability.
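A compact sketch of steps 1031 through 1033 for a single channel (NumPy assumed; applying it to each channel and then combining the channels mirrors the merge() call mentioned above):

    import numpy as np

    def equalize_channel(channel):
        # Step 1031: 256-bin histogram of the 8-bit channel
        hist = np.bincount(channel.ravel(), minlength=256)
        # Step 1032: cumulative distribution probability
        cdf = hist.cumsum() / channel.size
        # Step 1033: map each gray level k to s_k = (L - 1) * cdf[k]
        lut = np.round(cdf * 255.0).astype(np.uint8)
        return lut[channel]

On the 4 × 4 example that follows, this lookup table maps 50 to 64, 128 to 112, 200 to 191, and 255 to 255, matching the mapped grid shown.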
In one example, let the color values of the original image data in a certain channel be as follows:
255 128 200 50
50 200 255 50
255 200 128 128
200 200 255 50
the statistical information obtained is as follows and mapped:
Figure BDA0003834027780000082
Figure BDA0003834027780000091
taking 50 as an example, the calculation method of the mapped color value is as follows: 0.25 × 255-0) =63.75
The color values in the original image data are then mapped accordingly:
255 112 191 64
64 191 255 64
255 191 112 112
191 191 255 64
Step 1034, selecting, in the reference image data, pixels whose color values meet a preset change condition along the contour as the contour points characterizing color change.
Each pixel in the reference image data is traversed, and pixels whose color values meet a preset change condition (for example, the difference between the color values of adjacent pixels is greater than a certain threshold) are selected along the contour as the contour points characterizing color change.
Further, a target value may be preset, such as 50000. The target value may be a default empirical value, or the user may set a stylization strength that is then mapped to the target value; this embodiment does not limit it.
Then, in the reference image data, pixels meeting the preset change condition are randomly selected along the contour, up to the number given by the target value, as the contour points characterizing color change.
The larger the target value, the more contour points there are, the less detail is removed when realizing the Low Poly style, and the closer the result is to the original image data.
The smaller the target value, the fewer contour points there are, the more detail is removed when realizing the Low Poly style, and the further the result is from the original image data.
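One possible reading of step 1034 plus the target-value sampling, as a sketch (the specific change condition, the threshold, and the names are assumptions of this example):

    import numpy as np

    def pick_contour_points(ref_img, contour_mask, threshold=30, target=50000, seed=None):
        rng = np.random.default_rng(seed)
        # Change condition: summed color difference to the horizontal neighbor
        diff = np.abs(np.diff(ref_img.astype(np.int32), axis=1)).sum(axis=-1)
        diff = np.pad(diff, ((0, 0), (0, 1)))          # restore the original width
        ys, xs = np.nonzero(contour_mask & (diff > threshold))
        keep = rng.permutation(len(xs))[:target]       # at most `target` random points
        return np.stack([xs[keep], ys[keep]], axis=1)  # (N, 2) point coordinates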
In another embodiment of the present invention, step 103 may further include the steps of:
Step 1035, constructing any two contour points into a point pair.
In the present embodiment, for any contour point, any other contour point may be constructed as a point pair.
Step 1036, for any pair of points, calculating the distance separating the contour points in the pair of points.
For any point pair, the distance between two contour points in the point pair can be calculated by means of Euclidean distance and the like.
Step 1037, if the distance is greater than or equal to the preset first threshold, the contour points in the point pairs are retained.
Step 1038, if the distance is smaller than the preset first threshold, filtering out any contour point in the point pair.
In this embodiment, the distance may be compared with a preset first threshold.
If the distance is greater than or equal to the first threshold, the two contour points in the pair are far apart; the Low Poly character will be stronger, so the contour points in the pair can be retained.
If the distance is smaller than the first threshold, the two contour points in the pair are close together; the Low Poly character will be weaker, so either contour point in the pair can be deleted.
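A direct sketch of steps 1035 through 1038; it greedily drops one point of every too-close pair, which matches the intent although the publication leaves the exact choice open (quadratic cost, assumed acceptable for illustration):

    import numpy as np

    def filter_close_points(points, min_dist):
        kept = []
        for p in points:
            # Keep p only if it is at least min_dist away from every kept point
            if all(np.hypot(*(p - q)) >= min_dist for q in kept):
                kept.append(p)
        return np.array(kept)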
And 104, constructing a plurality of color blocks in the original image data according to the contour points.
In this embodiment, for the contour points in the original image data, at least three contour points are connected according to the Low Poly stylization specification to construct a color block. A color block is generally a convex polygon and is rendered by rendering engines such as OpenGL (Open Graphics Library) and OpenGL ES (OpenGL for Embedded Systems).
Further, the color blocks are ordered into a sequence so as to meet the rendering specification of rendering engines such as OpenGL and OpenGL ES.
In a specific implementation, the contour points may be traversed in the original image data and every three adjacent contour points connected, constructing a plurality of triangular color blocks that are adjacent to each other and do not overlap each other.
Here, the circumscribed circle of each color block contains no other contour points, and the Thiessen polygons (Dirichlet tessellation, also called Voronoi diagram) corresponding to the contour points that are the vertices of a color block share a common vertex, which is the center of the color block's circumscribed circle.
In the Thiessen polygon construction, the Euclidean distance between any two contour points p and q is denoted dist(p, q).
Let P = {p_1, p_2, …, p_n} be any n distinct contour points in the plane; these contour points are the base points. The Voronoi diagram corresponding to P is a subdivision of the plane into n cells with the following property: an arbitrary point q lies in the cell corresponding to base point p_i if and only if dist(q, p_i) < dist(q, p_j) for every p_j ∈ P, j ≠ i. The Voronoi diagram corresponding to P is denoted Vor(P).
Vor(P) refers to the edges and vertices that make up this subdivision; in Vor(P), the cell corresponding to a base point p_i is denoted V(p_i) and is called the Voronoi cell of p_i.
For different rendering engines, the way color blocks are constructed also differs. For OpenGL, for example, there are generally three modes for drawing a series of triangles as color blocks:
1、GL_TRIANGLES
every three contour points are grouped to draw a triangle, and the triangles are independent.
2、GL_TRIANGLE_STRIP
Starting from the third contour point, each point is combined with the previous two contour points to draw a triangle, i.e., a strip of linearly connected triangles.
This ordering ensures that the triangles are drawn with the same winding direction, so that the triangle strip can correctly form part of a surface.
3、GL_TRIANGLE_FAN。
Starting from the third contour point, each point is combined with the previous contour point and the first contour point to draw a triangle, i.e., a fan of connected triangles.
In an embodiment of the present invention, the above method for constructing color blocks further includes the following steps:
and step 1041, determining a base line.
In this embodiment, the color blocks may be constructed in a multiple recursion manner, and at least one baseline is determined in each recursion.
Initially, the baseline is the line connecting an arbitrary contour point in the original image data with the contour point closest to it. That is, at the start, one contour point is picked arbitrarily from the acquired discrete contour points, the contour point closest to it (e.g., by Euclidean distance) is found, and the two contour points are connected to serve as the initial baseline.
And 1042, searching a contour point which is closest to the base line on the right side of the base line.
With the baseline determined, the distance between each contour point on the right side of the baseline and the baseline (i.e., the distance of its perpendicular projection onto the baseline) is calculated and compared, so as to find the contour point closest to the baseline.
Step 1043, connecting the two contour points on the baseline with the found contour points, respectively, to obtain a color block with a triangular shape.
If the contour point closest to the base line is found in the recursion, two contour points on the base line can be respectively connected with the contour points found in the recursion to obtain a color block with a triangular shape.
Step 1044, judging whether all contour points in the original image data are traversed; if yes, go to step 1045, otherwise go to step 1046.
And step 1045, outputting the color block.
Step 1046, taking the connecting line containing the searched contour point as a new baseline, and returning to execute step 1041.
In each recursion, when the construction of a color block finishes, it can be judged whether all contour points in the original image data have been traversed. If they have, all the constructed color blocks can be output. If they have not, the line connecting the contour point found in this recursion with the two contour points on the baseline can be set as the new baseline, and the next recursion is entered, until all contour points in the original image data have been traversed.
And 105, filling uniform colors into the color blocks by taking the colors of all elements in the original image data as reference to obtain target image data.
In a specific implementation, a color block initially has no fill color. At this time, as shown in fig. 2, each color block may be filled with color by taking the colors of the elements in the original image data as reference, generating the target image data. Each color block keeps the approximate contour of the elements in the original image data and also keeps the approximate color of the elements at the position where the color block is located, so that the content of the target image data keeps the general look of the original image data while realizing the Low Poly style.
Further, the same color block is filled with a uniform color, and the colors filled in different color blocks may be the same or different.
In one embodiment of the present invention, step 105 may include the steps of:
and 1051, searching pixel points which are positioned in the color blocks and represent each element in the original image data aiming at each color block.
In this embodiment, all color patches are mapped back to the original image data, so that the pixels located in the color patches can be queried in the original image data, and the pixels represent each element in the original image data.
And step 1052, clustering the pixel points in the color blocks according to the color values of the pixel points to obtain a plurality of candidate clusters.
In each color block, the color value of the pixel point can be used as the characteristic of the pixel point, so that the pixel points are clustered according to the color value of the pixel point to obtain a plurality of clusters, and the clusters are recorded as candidate clusters.
In one embodiment of the present invention, step 1052 may further include the steps of:
step 10521 initializes a plurality of candidate clusters.
When clustering, a plurality of clusters may be initialized as candidate clusters; their number may be a default empirical value. Each candidate cluster has a center point. The center points may initially be set randomly; alternatively, points as far from each other as possible may be selected as center points, or the color values of the pixels may first be clustered with a hierarchical clustering algorithm or the Canopy algorithm to obtain several reference clusters, and a point then selected from each reference cluster as a center point (the center of the reference cluster, the point closest to that center, and so on). This embodiment does not limit the choice.
Step 10522, calculate the difference between the color value of the pixel and the center point.
In each round of clustering, each pixel point is regarded as a point in one candidate cluster, and the difference (i.e., distance) between the point and the center point is calculated by using the characteristics (i.e., color values) of the pixel points, such as euclidean distance, cosine distance, and the like.
And 10523, drawing the pixel points into the central point with the minimum difference.
For a given pixel point, the difference (i.e., distance) between the pixel point and the center point of each candidate cluster may be compared, and the candidate cluster with the smallest difference (i.e., distance) is selected as the candidate cluster to which the pixel point belongs, so that the pixel point is classified into the candidate cluster with the smallest difference (i.e., distance).
Step 10524, in each candidate cluster, calculate the average value of the color values of all the pixel points as the new center point.
After each pixel point is re-divided into candidate clusters, each candidate cluster comprises a plurality of pixel points, an average value of color values of the pixel points is calculated, the average value is assigned to the center point of the candidate cluster, and therefore the center point of the candidate cluster is updated.
Step 10525, determining whether the variation amplitude of the central point is less than or equal to a preset second threshold; if yes, go to step 10526, otherwise, go back to steps 10522-10525.
Step 10526, determine candidate cluster convergence.
For the same candidate cluster, the difference between the center point before updating and the center point after updating can be calculated as the change amplitude during updating, and the change amplitude during updating is compared with a preset second threshold value.
If the change amplitude during updating is smaller than or equal to the second threshold, the change amplitude representing the updating of the central point is smaller, the candidate cluster can be confirmed to be converged, and clustering is completed.
If the change amplitude during updating is larger than the second threshold value, which indicates that the change amplitude of the central point updating is larger, it can be determined that the candidate cluster is not converged, the next round of clustering is entered, and steps 10522-10525 are re-executed until the candidate cluster is converged.
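Steps 10521 through 10526 describe a k-means iteration over the pixel colors of a color block; a minimal sketch (the cluster count k and the convergence threshold are illustrative):

    import numpy as np

    def kmeans_colors(pixels, k=4, tol=1.0, seed=None):
        rng = np.random.default_rng(seed)
        pixels = pixels.astype(np.float64)             # (N, 3) color values
        centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
        while True:
            # Steps 10522-10523: assign each pixel to the nearest center
            dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
            labels = dist.argmin(axis=1)
            # Step 10524: each center becomes the mean color of its members
            new_centers = np.array([
                pixels[labels == i].mean(axis=0) if np.any(labels == i) else centers[i]
                for i in range(k)])
            # Steps 10525-10526: converged when no center moves more than tol
            if np.linalg.norm(new_centers - centers, axis=1).max() <= tol:
                return labels, new_centers
            centers = new_centers

Selecting the cluster with the most members and filling the color block with its center color then corresponds to steps 1053 and 1054 below.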
Step 1053, selecting one candidate cluster from the plurality of candidate clusters as a target cluster.
In this embodiment, one of the candidate clusters may be selected from the multiple candidate clusters according to the requirement of the service, and the selected candidate cluster is marked as a target cluster.
In general, the candidate cluster with the highest number of pixels is selected as the target cluster, so that the color value filled in the color block is closer to the color value in the original image data.
Of course, if the difference between the color values of adjacent color blocks is to be taken into account, the candidate cluster containing the most pixels may be selected as the target cluster under the condition that its color value differs from those of the adjacent color blocks; this embodiment does not limit it.
And 1054, filling the color value represented by the target cluster into the color block to obtain target image data.
And for the target cluster, color values represented by the target cluster can be filled in color blocks to realize coloring of the color blocks, and target image data is generated after each color block is colored.
In general, a color value represented by the central point of the target cluster may be filled in the color patch, so as to obtain target image data.
When the color blocks of the target image data are triangles or similar shapes, a rendering engine can be used for acceleration. When drawing a color block, not only geometric coordinates (i.e., vertex coordinates) but also texture coordinates are defined for each vertex (i.e., contour point). After the various transformations, the geometric coordinates determine where on the screen the vertex is drawn, and the texture coordinates determine which texel in the texture image is assigned to the vertex.
A texture image is a square array, and texture coordinates can usually be defined in one-, two-, three-, or four-dimensional form, called the s, t, r, and q coordinates. A one-dimensional texture is usually expressed by the s coordinate and a two-dimensional texture by the (s, t) coordinates; the r coordinate is currently ignored. The q coordinate, like w, is typically 1 and is mainly used to establish homogeneous coordinates. The function OpenGL defines for texture coordinates is:
void glTexCoord{1234}{sifd}[v](TYPE coords)
It sets the current texture coordinates; vertices produced by subsequent calls to glVertex() are all assigned the current texture coordinates. For glTexCoord1*(), the s coordinate is set to the given value, t and r are set to 0, and q is set to 1; glTexCoord2*() sets the s and t coordinates, with r set to 0 and q set to 1; for glTexCoord3*(), q is set to 1 and the other coordinates are set to the given values; glTexCoord4*() gives all four coordinates.
In this embodiment, openGL ES is taken as an example to explain a process of drawing color blocks, where the process is a programmable pipeline, and specifically includes the following operations:
1. VBO/VAO (Vertex Buffer Objects / Vertex Array Objects)
VBO/VAO is the vertex information the CPU provides to the GPU, including vertex coordinates, color (only the vertex color, independent of the texture color), texture coordinates (for texture mapping), and the like.
2. VertexShader (vertex shader)
The vertex shader is a program that processes the vertex information provided by the VBO/VAO; it runs once for each vertex provided. A Uniform (one variable type) stays consistent across all vertices, while an Attribute differs from vertex to vertex (it can be understood as a per-vertex input). Each execution of the vertex shader outputs Varyings (variables) and gl_Position.
Wherein, the input of the vertex shader comprises:
2.1, shader program: vertex shader program source code or executable file describing operations performed on vertices
2.2, vertex shader input (or attributes): data for each vertex provided by a vertex array
2.3, uniform variable (uniform): invariant data used by vertex/fragment shaders
2.4, samplers (Samplers): special uniform variable types for representing vertex shader usage textures
The vertex shader is the programmable stage that controls the transformation of the vertex coordinates, while the fragment shader controls the computation of each pixel's color.
3. Primitive Assembly:
The stage after the vertex shader is primitive assembly. A primitive is a geometric object such as a triangle, a line, or a point; at this stage, the vertices output by the vertex shader are grouped into primitives.
The vertex data is restored into a mesh structure according to the primitive's original link relation. A mesh consists of vertices and indices; at this stage the vertices are linked together according to the indices into the three kinds of primitives (points, lines, and surfaces), and triangles extending beyond the screen are clipped.
For example, if a triangle (mesh) has one vertex outside the screen and two inside, what is actually seen on the screen is a quadrangle; the quadrangle is cut into two small triangles (meshes).
In short, the points obtained after the vertex shader computation are grouped into points, lines, and planes (triangles) according to the link relationship.
4. Rasterization
Rasterization is the process of converting a primitive into a set of two-dimensional fragments, which are then processed by the fragment shader (they are its input). These two-dimensional fragments represent pixels that can be drawn on the screen; the mechanism that generates a value for each fragment from the vertex-shader outputs assigned to each primitive vertex is called interpolation.
The vertices of the assembled primitive can be understood as having become a shape, and during rasterization the pixels in the shape's area (texture coordinate v_texCoord, color, and other information) are interpolated according to the shape. Note that these are not yet screen pixels and are not yet colored; the fragment shader that follows completes the coloring.
5. FragmentShader (fragment shader)
The fragment shader implements a general programmable method for operating on fragments (pixels): it executes once for each fragment generated during the rasterization stage and produces one or more color values as output (in the case of multiple render targets).
6. Per-Fragment Operations (Fragment by Fragment operation)
At this stage, each segment will perform the following 5 operations:
6.1 Pixel Ownership Test:
Determines whether the pixel at location (x, y) in the frame buffer is owned by the current context.
For example, if a window of the displayed frame buffer is occluded by another window, the window system may determine that the occluded pixels do not belong to this OpenGL context, and they are therefore not displayed.
6.2 Scissor Test:
If the fragment lies outside the scissor region, it is discarded.
6.3 Stencil Test and Depth Test:
If the fragment does not pass the stencil test against the stencil buffer, it is discarded.
If the fragment fails the depth comparison against the value in the depth buffer, it is discarded.
6.4 Blending:
the newly generated fragment color values are combined with the color values stored in the frame buffer to generate new RGBA (Red, green, blue, and Alpha color spaces).
6.5 Dithering:
At the end of the per-fragment operation stage, a fragment is either rejected, or its color, depth, or stencil value is written at location (x, y) of the frame buffer. Whether the fragment's color, depth, and stencil values are written depends on the corresponding write masks, which allow finer control over the values written into the associated buffers. For example, the write mask of the color buffer can be set so that no red value can ever be written into it.
Finally, the generated fragments are placed in a frame buffer (the front buffer, the back buffer, or an FBO (Frame Buffer Object)); if it is not an FBO, the screen draws the fragments in the buffer, generating the pixels on the screen.
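For illustration only, a CPU-side stand-in for this rendering step (Matplotlib assumed; a real implementation would submit the triangles and per-block colors to OpenGL ES through the pipeline just described):

    import matplotlib.pyplot as plt
    from matplotlib.collections import PolyCollection

    def draw_color_blocks(triangles, colors, width, height, path="target.png"):
        # triangles: (M, 3, 2) vertex array; colors: (M, 3) RGB values in [0, 1]
        fig, ax = plt.subplots(figsize=(width / 100, height / 100), dpi=100)
        ax.add_collection(PolyCollection(triangles, facecolors=colors, edgecolors="none"))
        ax.set_xlim(0, width)
        ax.set_ylim(height, 0)  # image coordinates: y axis points down
        ax.axis("off")
        fig.savefig(path, dpi=100, bbox_inches="tight", pad_inches=0)
        plt.close(fig)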
In the present embodiment, original image data is acquired; the contour of each element in the original image data is detected; contour points characterizing color changes are detected in the contour; a plurality of color blocks are constructed in the original image data according to the contour points; and the color blocks are filled with uniform colors, taking the colors of the elements in the original image data as reference, to obtain target image data. The method uses contours and color changes to construct the color blocks and fills them with colors referenced from the original image data, realizing the Low Poly style. It does not depend on professional editing tools, so the technical threshold is low, and it does not depend on a neural network: the whole process is simple to run, the amount of calculation is low, the occupation of resources can be greatly reduced, and the stylization speed is increased, improving stylization efficiency.
Example two
Fig. 3 is a flowchart of a stylization method according to a second embodiment of the present invention. This embodiment is applicable to the case where color blocks are constructed according to contours and color changes so as to apply the Low Poly style to video data. The method may be executed by a stylization apparatus, which may be implemented in the form of hardware and/or software and configured in an electronic device. As shown in fig. 3, the method includes:
Step 301, acquiring original video data whose content introduces a game.
In this embodiment, the computer program implementing Low Poly stylization is simple. It may therefore be deployed in a server and provide a Low Poly stylization service to users of a local area network and/or a public network through an API (Application Programming Interface), or it may be deployed in a client in the form of a plug-in, hard coding, or the like, to provide the Low Poly stylization service to users of the client.
If the computer program implementing Low Poly stylization is deployed in the server, the user may call the API provided by the server from the client and upload the video data to be stylized to the server, or provide the server with a network address of the video data to be stylized, from which the server downloads the video data.
If the computer program implementing Low Poly stylization is deployed in the client, the user can select video data located in a local directory in the client and wait for stylization, or call the camera to capture video data and wait for stylization.
To facilitate distinction, the video data to be stylized is recorded as original video data; its content mainly introduces a game so as to promote that game.
The type of the Game may include MOBA (Multiplayer Online Battle Arena), RPG (Role-playing Game), SLG (Simulation Game), and the like, which is not limited in this embodiment.
Further, the content of the original video data can be divided into two main forms, game content and live-action scenarios. The game content may be introduced through the process of a user controlling the game, by a speaker, or by a speaker wearing an in-game costume. The scenarios can be further divided into the following categories:
1. Gourmet food sharing
The original video data contains gourmet material that attracts the user's attention, with a play-the-game-while-eating angle embedded in it.
2. Topics close to the user's life
The content of the original video data is close to the user's current living state, and the game is placed into aspects of that life, such as playing games, eating, and buying snacks. The first half of the material is mainly a two-person conversation, and the second half is the game-placement segment.
3. Exaggerated sitcom
The original video data contains sitcom material, some of it exaggerated, to attract the user's attention.
Of course, the above original video data are only examples; when implementing this embodiment, other original video data may be set according to the actual situation, which this embodiment does not limit. In addition, those skilled in the art may adopt still other original video data according to actual needs, which this embodiment likewise does not limit.
Step 302, detecting the outline of each element in the original image data.
In a specific implementation, the original video data has multiple frames of image data, which are recorded as original image data, and in each frame of image data, the outlines of the elements can be detected.
When detecting the contour, calculating a gray value of the original image data to obtain gray image data; calculating gradient values of the gray image data to obtain gradient image data; performing normalization processing on the gradient values in the gradient image data; and taking an absolute value of the gradient value after the normalization processing in the gradient image data to obtain the outline of each element.
Further, when the gradient value is calculated, a first convolution kernel and a second convolution kernel are respectively loaded; performing a first convolution operation on the gray image data in the horizontal direction by using a first convolution kernel to obtain a gradient value in the horizontal direction; performing a second convolution operation on the gray image data in the vertical direction by using a second convolution kernel to obtain a gradient value in the vertical direction; and fusing the gradient values in the horizontal direction and the vertical direction into gradient values of the gray image data to obtain the gradient image data.
Step 303, contour points representing color variations are detected in the contour.
When detecting the contour points, calculating a histogram for gray image data, wherein the gray image data comprises gray values of original image data; calculating cumulative distribution probability for the histogram; mapping the original image data into reference image data according to the cumulative distribution probability; and in the reference image data, selecting pixel points with color values meeting preset change conditions along the contour as contour points representing color change.
In addition, before the contour points are screened out, any two contour points can be constructed into point pairs; for any point pair, calculating the distance between contour points in the point pair; if the distance is greater than or equal to a preset first threshold value, retaining the contour points in the point pairs; if the distance is smaller than a preset first threshold value, filtering any contour point in the point pair.
The contour points detected in different frames of original image data fluctuate somewhat, and these fluctuations make the color blocks flicker noticeably.
In one smoothing approach, contour points with the same position are looked up in two adjacent frames of original image data. If the number of contour points is fixed and they are numbered, contour points with the same number are the contour points with the same position; alternatively, given the position of a contour point in the previous frame, the contour point closest to that position in the next frame is taken as the contour point with the same position; and so on.
For contour points with the same position, the adjusted contour point in the previous frame of original image data and the original contour point in the next frame of original image data are linearly fused to obtain the adjusted contour point in the next frame.
That is, a first weight is assigned to the position of the adjusted contour point in the previous frame of original image data and a second weight to the position of the original contour point in the next frame, with the second weight greater than the first, so that the adjusted position in the next frame is biased toward the original position in the next frame.
And calculating a first product between the position of the adjusted contour point in the original image data of the previous frame and the first weight, calculating a second product between the position of the original contour point in the original image data of the next frame and the second weight, and adding the first product and the second product to obtain the adjusted contour point in the original image data of the next frame.
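This linear fusion amounts to an exponential moving average over frames. A sketch in Python; the particular weight values are illustrative (the text only requires the second weight to exceed the first), and that the two weights sum to 1 is an assumption:

def smooth_contour_point(prev_adjusted, curr_original, first_weight=0.3):
    # second_weight > first_weight biases the fused position toward the
    # current frame's original contour point, as required above.
    second_weight = 1.0 - first_weight
    x = first_weight * prev_adjusted[0] + second_weight * curr_original[0]
    y = first_weight * prev_adjusted[1] + second_weight * curr_original[1]
    return (x, y)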
Step 304, constructing a plurality of mutually adjacent color blocks in the original image data according to the contour points.
When constructing the color blocks, the contour points in the original image data are traversed, and every three adjacent contour points are connected to construct a plurality of color blocks that are triangular, adjacent to each other, and non-overlapping; the circumscribed circle of each color block contains no other contour points, and the Thiessen (Voronoi) polygons corresponding to the contour points serving as the vertices of a color block share a common vertex, which is the center of the color block's circumscribed circle.
Further, a baseline can be determined, where the baseline is initially the line connecting any contour point in the original image data with the contour point closest to it; the contour point closest to the baseline is searched for on the right side of the baseline; the two contour points on the baseline are each connected with the found contour point to obtain a triangular color block; it is then judged whether all contour points in the original image data have been traversed; if so, the color blocks are output; if not, a line containing the found contour point is taken as a new baseline, and the process returns to the step of determining a baseline.
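The circumcircle-empty property stated above is exactly the Delaunay condition, so for illustration a library triangulation can stand in for the baseline-advancing construction. A sketch using scipy (an implementation choice of this sketch, not of the embodiment):

import numpy as np
from scipy.spatial import Delaunay

def build_color_blocks(points):
    # `points`: list of (x, y) contour points, at least three of them.
    # Delaunay triangles are mutually adjacent, non-overlapping, and
    # their circumscribed circles contain no other contour points.
    pts = np.asarray(points, dtype=np.float64)
    tri = Delaunay(pts)
    # Each row of `tri.simplices` indexes the three contour points
    # forming one triangular color block.
    return pts[tri.simplices]  # shape (n_blocks, 3, 2)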
Step 305, filling uniform colors into the color blocks with reference to the colors of the elements in the original image data, to obtain target image data.
When filling colors, for each color block, the pixel points that are located inside the color block and represent the elements of the original image data are found; within the color block, these pixel points are clustered according to their color values to obtain a plurality of candidate clusters; one candidate cluster is selected from the candidate clusters as a target cluster; and the color value represented by the target cluster is filled into the color block to obtain the target image data.
During clustering, a plurality of candidate clusters are initialized, each candidate cluster having a center point; the difference between the color value of each pixel point and each center point is calculated; each pixel point is assigned to the center point with the smallest difference; within each candidate cluster, the average of the color values of all its pixel points is calculated as the new center point; it is then judged whether the variation amplitude of the center points is smaller than or equal to a preset second threshold; if so, the candidate clusters are determined to have converged; if not, the process returns to calculating the difference between the color value of each pixel point and each center point.
Correspondingly, when filling colors, the color value represented by the center point of the target cluster is filled into the color block to obtain the target image data.
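The clustering loop above is plain k-means (Lloyd's algorithm). A compact Python sketch; the cluster count, the second threshold, and the rule of taking the largest cluster as the target cluster are assumptions of this sketch, since the text only says that one candidate cluster is selected:

import numpy as np

def fill_block_color(pixels, k=3, second_threshold=1.0, max_iter=20):
    # `pixels`: (N, 3) array of color values of the points inside one block.
    rng = np.random.default_rng(0)
    k = min(k, len(pixels))
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(np.float64)
    for _ in range(max_iter):
        # Assign each pixel to the center point with the smallest difference.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([
            pixels[labels == i].mean(axis=0) if np.any(labels == i) else centers[i]
            for i in range(k)
        ])
        # Converged once no center moves by more than the second threshold.
        moved = np.abs(new_centers - centers).max()
        centers = new_centers
        if moved <= second_threshold:
            break
    target = np.bincount(labels, minlength=k).argmax()  # largest cluster (assumed rule)
    return centers[target]  # uniform fill color for this block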
In this embodiment, since the implementation of the low-poly (Low Polygon) style for a single frame of original image data is basically similar to the application in the first embodiment, the description here is relatively brief; for the relevant points, reference may be made to the corresponding parts of the first embodiment, which are not repeated in detail here.
Step 306, replacing the original image data in the original video data with the target image data, to obtain target video data.
In the original video data, the target image data may be substituted for the corresponding original image data to obtain the target video data.
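A sketch of this frame-replacement loop in Python with OpenCV; the codec and container are illustrative choices, and stylize_frame is a hypothetical callback standing in for the per-frame processing of steps 302 to 305:

import cv2

def stylize_video(in_path, out_path, stylize_frame):
    # Read each frame of the original video data, replace it with its
    # stylized counterpart, and write out the target video data.
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    ok, frame = cap.read()
    while ok:
        out.write(stylize_frame(frame))  # target image data replaces the original
        ok, frame = cap.read()
    cap.release()
    out.release()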
Thereafter, advertisement element data related to the game is added to the target video data to obtain advertisement video data; the advertisement video data is published in a designated channel (such as news, short videos, novel reading, or sports and health), so that when a client accesses the channel, the advertisement video data is pushed to the client for playing, and when a user becomes interested in the target game, the user downloads the target game from a game distribution platform.
Further, the advertisement element data may include an icon (Logo), banner information (Banner), an Ending Card (EC), and the like.
The icon Logo is a mark of the business object, and may be a text icon Logo (including a name of the business object) or a graphic icon Logo.
The Banner is generally a rectangular piece of information, usually located at the top and/or bottom of the image data, which can record information about the business object itself (such as in-game pictures, characters, and names) and information that attracts users to purchase or download the business object (such as a gift code).
The ending card EC carries an identifier for downloading the business object, for example, information about the business object itself (such as in-game pictures, characters, and names) and the purchase or download channel of the business object (such as the name and icon of an application distribution platform, the name and icon of a shopping platform, and the like).
In this embodiment, original video data whose content introduces a game is acquired, the original video data comprising multiple frames of original image data; the outline of each element in the original image data is detected; contour points representing color changes are detected in the contour; a plurality of mutually adjacent color blocks are constructed in the original image data according to the contour points; the color blocks are filled with uniform colors with reference to the colors of the elements in the original image data, obtaining target image data; and the target image data replaces the original image data in the original video data, obtaining the target video data. The method constructs the color blocks from the contour and the color changes, and fills the color blocks with colors taken from the original image data, thereby realizing the low-poly (Low Polygon) style. This process depends neither on a professional editing tool with a high technical threshold nor on a neural network; the whole operation is simple, the amount of calculation is small, the occupation of resources can be greatly reduced, and the stylization speed, and thus the stylization efficiency, is improved.
Embodiment 3
Fig. 4 is a schematic structural diagram of a stylizing apparatus according to a third embodiment of the present invention. As shown in fig. 4, the apparatus includes:
an original image data obtaining module 401, configured to obtain original image data;
a contour detection module 402, configured to detect contours of respective elements in the original image data;
a contour point detection module 403 for detecting contour points representing color variations in the contour;
a color block constructing module 404, configured to construct a plurality of color blocks in the original image data according to the contour points;
and a color filling module 405, configured to fill a uniform color to the color block with reference to the color of each element in the original image data, so as to obtain target image data.
In an embodiment of the present invention, the contour detection module 402 is further configured to:
calculating a gray value of the original image data to obtain gray image data;
calculating gradient values of the gray image data to obtain gradient image data;
performing a normalization process on the gradient values in the gradient image data;
and taking an absolute value of the gradient value after the normalization processing in the gradient image data to obtain the outline of each element.
In an embodiment of the present invention, the contour detection module 402 is further configured to:
respectively loading a first convolution kernel and a second convolution kernel;
performing a first convolution operation on the gray image data in a horizontal direction by using the first convolution kernel to obtain a gradient value in the horizontal direction;
performing a second convolution operation on the gray image data in the vertical direction by using the second convolution kernel to obtain a gradient value in the vertical direction;
and fusing the gradient values in the horizontal direction and the vertical direction into gradient values of the gray image data to obtain gradient image data.
In an embodiment of the present invention, the contour point detection module 403 is further configured to:
calculating a histogram for gray scale image data, the gray scale image data including a gray scale value of the original image data;
calculating a cumulative distribution probability for the histogram;
mapping the original image data into reference image data according to the cumulative distribution probability;
and selecting pixel points with color values meeting preset change conditions along the contour in the reference image data as contour points for representing color change.
In an embodiment of the present invention, the contour point detection module 403 is further configured to:
constructing any two contour points into point pairs;
for any of the point pairs, calculating a distance separating the contour points in the point pair;
if the distance is greater than or equal to a preset first threshold value, the contour points in the point pairs are reserved;
and if the distance is smaller than a preset first threshold value, filtering any contour point in the point pair.
In an embodiment of the present invention, the color block constructing module 404 is further configured to:
traversing the contour points in the original image data, connecting three adjacent contour points, and constructing a plurality of color blocks which are triangles and are mutually adjacent and not overlapped;
the circumscribed circle of each color block contains no other contour points, and the Thiessen (Voronoi) polygons corresponding to the contour points serving as the vertices of a color block share a common vertex, which is the center of the color block's circumscribed circle.
In an embodiment of the present invention, the color block constructing module 404 is further configured to:
determining a baseline, wherein the baseline is initially a connecting line between any contour point in the original image data and other contour points closest to the contour point;
searching the contour point closest to the base line on the right side of the base line;
respectively connecting the two contour points on the base line with the searched contour points to obtain a color block with a triangular shape;
judging whether all the contour points in the original image data have been traversed; if so, outputting the color blocks; if not, taking a line containing the found contour point as a new baseline and returning to the step of determining a baseline.
In one embodiment of the present invention, the color filling module 405 is further configured to:
for each color block, searching pixel points which are positioned in the color block and represent each element in the original image data;
in the color blocks, clustering the pixel points according to the color values of the pixel points to obtain a plurality of candidate clusters;
selecting one of the candidate clusters from the plurality of candidate clusters as a target cluster;
and filling the color value represented by the target cluster into the color block to obtain target image data.
In one embodiment of the present invention, the color filling module 405 is further configured to:
when clustering the pixel points according to their color values to obtain a plurality of candidate clusters:
initializing a plurality of candidate clusters, the candidate clusters having a center point;
calculating the difference between the color value of the pixel point and the central point;
assigning the pixel point to the central point with the minimum difference value;
in each candidate cluster, calculating the average value of the color values of all the pixel points as a new central point;
judging whether the variation amplitude of the central point is less than or equal to a preset second threshold value or not; if yes, determining that the candidate cluster is converged; if not, returning to execute the calculation of the difference value between the color value of the pixel point and the central point.
In one embodiment of the present invention, the color fill module 405 is further configured to:
and filling the color value represented by the central point of the target cluster into the color block to obtain target image data.
The stylization device provided by the embodiment of the invention can execute the stylization method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the stylization method.
Embodiment 4
Fig. 5 is a schematic structural diagram of a stylizing apparatus according to a fourth embodiment of the present invention. As shown in fig. 5, the apparatus includes:
an original video data obtaining module 501, configured to obtain original video data with content of an introduction game, where the original video data includes multiple frames of original image data;
a contour detection module 502, configured to detect contours of respective elements in the original image data;
a contour point detection module 503 for detecting contour points representing color variations in the contour;
a color block construction module 504, configured to construct a plurality of mutually adjacent color blocks in the original image data according to the contour points;
a color filling module 505, configured to fill uniform colors to the color blocks with reference to colors of each element in the original image data, to obtain target image data;
a target video data generating module 506, configured to replace the original image data with the target image data in the original video data to obtain target video data.
In one embodiment of the present invention, further comprising:
the contour point searching module is used for respectively searching the contour points with the same position in the original image data of two adjacent frames;
and the contour point smoothing module is used for, with respect to contour points at the same position, linearly fusing the adjusted contour point in the previous frame of original image data with the original contour point in the next frame of original image data, to obtain the adjusted contour point in the next frame of original image data.
In one embodiment of the present invention, further comprising:
the advertisement video data generation module is used for adding advertisement element data related to the game in the target video data to serve as advertisement video data;
and the advertisement video data publishing module is used for publishing the advertisement video data in a specified channel so as to push the advertisement video data to the client for playing when the client accesses the channel.
The stylization device provided by the embodiment of the invention can execute the stylization method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the stylization method.
Embodiment 5
FIG. 6 illustrates a block diagram of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The processor 11 performs the various methods and processes described above, such as the stylization method.
In some embodiments, the stylization method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the stylization method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the stylization method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system, thereby overcoming the defects of high management difficulty and weak service expansibility existing in traditional physical hosts and VPS (Virtual Private Server) services.
Embodiment 6
Embodiments of the present invention also provide a computer program product, which includes a computer program that, when executed by a processor, implements a stylizing method as provided in any of the embodiments of the present invention.
The computer program code of the computer program product for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired result of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (14)

1. A stylization method, comprising:
acquiring original image data;
detecting the outline of each element in the original image data;
detecting contour points in the contour that characterize color changes;
constructing a plurality of color blocks in the original image data according to the contour points;
and filling uniform colors on the color blocks by taking the colors of all elements in the original image data as reference to obtain target image data.
2. The method of claim 1, wherein detecting the contour of each element in the raw image data comprises:
calculating a gray value of the original image data to obtain gray image data;
calculating a gradient value of the gray image data to obtain gradient image data;
performing a normalization process on the gradient values in the gradient image data;
and taking an absolute value of the gradient value after the normalization processing in the gradient image data to obtain the outline of each element.
3. The method of claim 2, wherein said computing gradient values for said grayscale image data resulting in gradient image data comprises:
respectively loading a first convolution kernel and a second convolution kernel;
performing a first convolution operation on the gray image data in a horizontal direction by using the first convolution kernel to obtain a gradient value in the horizontal direction;
performing a second convolution operation on the gray image data in the vertical direction by using the second convolution kernel to obtain a gradient value in the vertical direction;
and fusing the gradient values in the horizontal direction and the vertical direction into gradient values of the gray image data to obtain gradient image data.
4. The method of claim 1, wherein said detecting contour points in said contour that characterize color changes comprises:
calculating a histogram for gray scale image data, the gray scale image data comprising gray scale values of the original image data;
calculating a cumulative distribution probability for the histogram;
mapping the original image data into reference image data according to the cumulative distribution probability;
and selecting pixel points with color values meeting preset change conditions along the contour in the reference image data as contour points for representing color change.
5. The method of claim 4, wherein said detecting contour points in said contour that characterize color changes further comprises:
constructing any two contour points into point pairs;
for any of the pairs of points, calculating a distance separating the contour points in the pair of points;
if the distance is greater than or equal to a preset first threshold value, the contour points in the point pairs are reserved;
and if the distance is smaller than a preset first threshold value, filtering any contour point in the point pairs.
6. The method of claim 1, wherein constructing a plurality of color patches from the contour points in the raw image data comprises:
traversing the contour points in the original image data, connecting three adjacent contour points, and constructing a plurality of color blocks which are triangles and are mutually adjacent and not overlapped;
and the circumscribed circle of each color block does not contain other contour points, and the Thiessen polygon corresponding to the contour points as the vertexes of the color block has a common vertex which is the center of the circumscribed circle of the color block.
7. The method of claim 6, wherein said traversing said contour points in said raw image data, connecting three adjacent contour points, constructing a plurality of color patches having shapes of triangles, said triangles being adjacent to each other and not overlapping each other, comprises:
determining a baseline, wherein the baseline is initially a connecting line between any contour point in the original image data and other contour points closest to the contour point;
searching the contour point closest to the base line on the right side of the base line;
connecting the two contour points on the base line with the searched contour points respectively to obtain color blocks in a triangular shape;
judging whether all the contour points in the original image data have been traversed; if so, outputting the color blocks; if not, taking a line containing the found contour point as a new baseline and returning to the step of determining a baseline.
8. The method according to any one of claims 1 to 7, wherein the filling the color block with uniform color by using the color of each element in the original image data as a reference to obtain target image data comprises:
for each color block, searching pixel points which are positioned in the color block and represent each element in the original image data;
in the color blocks, clustering the pixel points according to the color values of the pixel points to obtain a plurality of candidate clusters;
selecting one of the candidate clusters from a plurality of the candidate clusters as a target cluster;
and filling the color value represented by the target cluster into the color block to obtain target image data.
9. The method of claim 8,
the clustering the pixel points according to the color values of the pixel points to obtain a plurality of candidate clusters comprises the following steps:
initializing a plurality of candidate clusters, the candidate clusters having a center point;
calculating the difference between the color value of the pixel point and the central point;
assigning the pixel point to the central point with the minimum difference value;
in each candidate cluster, calculating the average value of the color values of all the pixel points as a new central point;
judging whether the variation amplitude of the central point is less than or equal to a preset second threshold value or not; if yes, determining that the candidate cluster is converged; if not, returning to execute the calculation of the difference value between the color value of the pixel point and the central point;
and filling the color value represented by the target cluster into the color block to obtain target image data, wherein the method comprises the following steps:
and filling the color value represented by the central point of the target cluster into the color block to obtain target image data.
10. A stylization method, comprising:
acquiring original video data with content of introducing games, wherein the original video data comprises multiple frames of original image data;
detecting the outline of each element in the original image data;
detecting contour points in the contour that characterize color changes;
constructing a plurality of color blocks which are adjacent to each other in the original image data according to the contour points;
filling uniform colors on the color blocks by taking the colors of all elements in the original image data as references to obtain target image data;
and replacing the original image data in the original video data with the target image data to obtain target video data.
11. The method of claim 10, further comprising:
respectively searching the contour points with the same position in the original image data of two adjacent frames;
and aiming at the contour points with the same positions, linearly fusing the adjusted contour points in the original image data of the previous frame with the original contour points in the original image data of the next frame to obtain the adjusted contour points in the original image data of the next frame.
12. The method of claim 10 or 11, further comprising:
adding advertisement element data related to the game in the target video data as advertisement video data;
and releasing the advertisement video data in a specified channel so as to push the advertisement video data to the client for playing when the client accesses the channel.
13. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the stylization method of any one of claims 1-12.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the stylization method of any one of claims 1-12.