CN116402980A - Virtual fluff generation method, device, equipment, medium and product - Google Patents


Info

Publication number
CN116402980A
CN116402980A (application CN202111633028.5A)
Authority
CN
China
Prior art keywords
virtual
fluff
distribution
feature points
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111633028.5A
Other languages
Chinese (zh)
Inventor
林伟锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202111633028.5A priority Critical patent/CN116402980A/en
Priority to PCT/CN2022/139558 priority patent/WO2023125071A1/en
Publication of CN116402980A publication Critical patent/CN116402980A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/6009Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/66Methods for processing data by generating or executing the game program for rendering three dimensional images
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082Virtual reality
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The present disclosure provides a virtual fluff generation method, apparatus, device, medium, and product, relating to the field of computer technology. The method includes: acquiring attribute information configured by a user for the virtual fluff of a virtual prop, the attribute information including the type and density of the virtual fluff; determining the distribution of feature points in a preset image according to the type of the virtual fluff, and determining the number of feature points in the preset image according to the density of the virtual fluff; then generating a noise image based on the distribution and number of the feature points, and generating the virtual fluff of the virtual prop based on the noise image. In this way, simply by configuring the attribute information, the user can quickly preview the virtual fluff of the virtual prop, which simplifies user operations, improves the efficiency of virtual fluff generation, and improves the user experience.

Description

Virtual fluff generation method, device, equipment, medium and product
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a virtual fluff generating method, apparatus, device, computer readable storage medium, and computer program product.
Background
To increase engagement, many applications provide props. For example, augmented reality (AR) game applications and short-video applications typically provide virtual props such as headgear, gloves, and clothing. These virtual props may carry virtual fluff, and improving the fluff effect can increase the props' popularity, and in turn the application's traffic and activity.
At present, when designing the virtual fluff of a virtual prop, a user usually has to design each strand of fluff manually, preview the result, and then repeatedly adjust each strand by hand until the fluff achieves a satisfactory presentation.
However, repeatedly adjusting each strand by hand multiplies user operations, increases the complexity of designing virtual fluff for virtual props, and thus lowers the efficiency of virtual fluff generation.
Disclosure of Invention
The purpose of the present disclosure is to provide a virtual fluff generation method, apparatus, device, computer-readable storage medium, and computer program product that simplify user operations and improve the efficiency of virtual fluff generation.
In a first aspect, the present disclosure provides a virtual fluff generating method, including:
acquiring attribute information configured by a user for the virtual fluff of a virtual prop; the attribute information comprises the type and the density of the virtual fluff;
determining the distribution of feature points in a preset image according to the type of the virtual fluff; determining the number of feature points in the preset image according to the density of the virtual fluff;
generating a noise image according to the distribution of the feature points and the number of the feature points;
and generating the virtual fluff of the virtual prop based on the noise image.
In a second aspect, the present disclosure provides a virtual fluff generating device comprising:
an acquisition unit, configured to acquire attribute information configured by a user for the virtual fluff of a virtual prop; the attribute information comprises the type and the density of the virtual fluff;
a feature point determining unit, configured to determine the distribution of feature points in a preset image according to the type of the virtual fluff, and to determine the number of feature points in the preset image according to the density of the virtual fluff;
and a generating unit, configured to generate a noise image according to the distribution of the feature points and the number of the feature points, and to generate the virtual fluff of the virtual prop based on the noise image.
In a third aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing device, implements the steps of the method of any of the first aspects of the disclosure.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method according to any one of the first aspects of the present disclosure.
In a fifth aspect, the present disclosure provides a computer program product comprising instructions which, when run on a device, cause the device to perform the method of any one of the implementations of the first aspect described above.
From the above technical solution, the present disclosure has the following advantages:
the present disclosure provides a virtual pile generation method, in which, only the attribute information of virtual piles of a virtual prop is required to be input by a user, for example, the type and the degree of density of the virtual piles are input, so that the distribution of feature points in a preset image can be determined according to the type of the virtual piles, and the number of feature points in the preset image can be determined according to the degree of density of the virtual piles; then generating a noise image based on the distribution of the feature points and the number of the feature points, and then generating virtual naps of the virtual prop based on the noise image. Therefore, compared with the traditional method of manually repeatedly adjusting each virtual fluff, the method can greatly simplify the operation of a user, improve the generation efficiency of the virtual fluff and improve the user experience.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly introduced below.
Fig. 1 is a schematic diagram of an application scenario of a virtual fluff generating method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a virtual fluff generation method provided by an embodiment of the present disclosure;
FIG. 3A is a schematic diagram of a configuration interface provided by an embodiment of the present disclosure;
FIG. 3B is a schematic diagram of yet another configuration interface provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of determining feature points according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a preset image according to an embodiment of the disclosure;
FIG. 6A is a schematic diagram of a noise image provided by an embodiment of the present disclosure;
FIG. 6B is a schematic diagram of yet another noise image provided by an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a virtual fluff generating device according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
The terms "first," "second," and the like in the presently disclosed embodiments are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature.
Some technical terms related to the embodiments of the present disclosure will be first described.
Virtual fluff refers to computer-generated fluff that imitates real fluff, which may be natural fluff (e.g., animal fur) or artificial fluff (e.g., a chemical-fiber carpet). Adding virtual fluff to virtual props (e.g., gloves, headgear) can increase the props' popularity; when such props are provided in short-video or game applications, they can in turn raise the application's traffic and activity.
Virtual fluff is often generated from a noise image: adjusting the noise dots in the noise image adjusts the virtual fluff of the virtual prop, with each noise dot corresponding uniquely to one strand of fluff. At present, when designing virtual fluff, the user has to manually and repeatedly adjust each noise dot in the noise image, which is time-consuming and laborious, increases the complexity of designing virtual fluff, and lowers the efficiency of virtual fluff generation.
In view of this, the embodiments of the present disclosure provide a virtual fluff generation method that may be performed by an electronic device, i.e., a device with data processing capability, which may be a server or a terminal. Terminals include, but are not limited to, smart phones, tablet computers, notebook computers, personal digital assistants (PDAs), smart wearable devices, and the like. The server may be a cloud server, for example a central server in a central cloud computing cluster or an edge server in an edge cloud computing cluster; it may also be a server in a local data center, i.e., a data center directly controlled by the user.
The electronic device acquires the attribute information configured by a user for the virtual fluff of a virtual prop, the attribute information including the type and density of the virtual fluff; determines the distribution of feature points in a preset image according to the type, and the number of feature points according to the density; then generates a noise image according to the distribution of the feature points and the number of the feature points; and then generates the virtual fluff of the virtual prop using the noise image. In this method, the user can adjust the virtual fluff of the virtual prop simply by inputting its attribute information, without manually adjusting the noise image.
Fig. 1 is a schematic diagram of an application scenario of a virtual fluff generation method according to an embodiment of the present disclosure. The user configures the attribute information of a virtual prop's virtual fluff on the electronic device; the electronic device automatically generates a noise image based on the configured attribute information and then generates the prop's virtual fluff from the noise image. In this scenario, the user only needs to configure the type and density of the virtual fluff for the electronic device to automatically generate fluff that meets the user's requirements (i.e., corresponds to the configured attribute information). This simplifies user operations, improves the efficiency of generating virtual fluff, and improves the user experience.
It should be noted that the virtual fluff generation method provided by the embodiments of the present disclosure may be executed by the electronic device alone, or cooperatively by the electronic device and a server; when it is executed by the electronic device alone, the electronic device can perform the method offline.
In order to make the technical solution of the present disclosure clearer and easier to understand, the virtual fluff generation method provided by the embodiments of the present disclosure is described below with reference to the accompanying drawings, from the perspective of the electronic device. Fig. 2 is a flowchart of a virtual fluff generation method according to an embodiment of the present disclosure; the method includes:
s201, the electronic equipment acquires attribute information of virtual fluff configuration of a user aiming at the virtual prop.
Virtual props may refer to virtual ornaments, such as virtual gloves, headgear, and the like. These virtual props typically carry virtual fluff, which creates a better interactive experience and increases their popularity. For the same prop, or props of the same type, the attribute information of the virtual fluff influences how the fluff is presented: for example, the type of the fluff, its density, and its thickness.
In some examples, the electronic device may present a human-computer interaction interface. Fig. 3A is a schematic diagram of a configuration interface provided by an embodiment of the present disclosure. The configuration interface includes a type configuration control 311, a density configuration control 312, and a generation control 320. The type configuration control 311 is used to configure the type of the virtual fluff: for example, when the user clicks the control 311, the electronic device presents a drop-down box from which the user selects the type of fluff to be configured. Similarly, the density configuration control 312 is used to configure the density of the fluff: when the user clicks the control 312, the electronic device presents a drop-down box from which the user selects the density to be configured. Of course, in other examples, the user may directly input the desired type and density. The generation control 320 is used to generate the noise image: after receiving the attribute information configured by the user, the electronic device automatically generates a noise image based on it.
In other embodiments, as shown in Fig. 3B, which is a schematic diagram of yet another configuration interface provided by embodiments of the present disclosure, the interface further includes a thickness configuration control 313 compared with the interface of Fig. 3A. The user can thus also configure the thickness of the virtual fluff, and the electronic device generates fluff of the configured thickness, meeting the user's requirements. For example, when the user clicks the thickness configuration control 313, the electronic device presents a drop-down box from which the user selects the thickness to be configured; in other examples, the user may directly input the desired thickness.
In the embodiments of the present disclosure, the electronic device offers the selectable attribute values through drop-down boxes, which further simplifies the user's input and further improves the efficiency of generating virtual fluff.
It should be noted that the above ways of acquiring attribute information are merely examples. In other embodiments, the user may prepare a configuration file of the attribute information in advance and import it into the electronic device, which then obtains the attribute information from the configuration file.
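As an illustration only, such an attribute-information configuration file and its import might look as follows; the JSON format, field names, and function name are assumptions made for this sketch, since the disclosure does not prescribe a file format.

```python
import json

# Hypothetical attribute-information configuration file (format assumed),
# e.g. fluff_config.json:
#   {"type": "natural", "density_level": 1, "thickness_level": 2}

def load_fluff_attributes(path: str) -> dict:
    """Import a pre-made configuration file of virtual fluff attributes."""
    with open(path, "r", encoding="utf-8") as f:
        attrs = json.load(f)
    # Minimal validation of the configured attribute information.
    if attrs["type"] not in ("natural", "artificial"):
        raise ValueError("unknown virtual fluff type: %r" % attrs["type"])
    return attrs
```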
S202, the electronic device determines the distribution of feature points in a preset image according to the type of the virtual fluff.
Different types of virtual fluff correspond to different distributions of feature points in the preset image; in other words, the type of the virtual fluff determines the distribution of the feature points. Accordingly, an association between fluff types and feature-point distributions can be established in advance, and the electronic device can then determine the distribution of feature points in the preset image from the type configured by the user.
In some examples, virtual fluff may be classified, by the way it forms, into natural fluff and artificial fluff. Natural fluff, such as animal fur, generally conforms to a Poisson distribution, so an association between natural fluff and the Poisson distribution can be built; similarly, artificial fluff, such as a chemical-fiber carpet, is generally uniformly distributed, so an association between artificial fluff and the uniform distribution can be built. Table 1 shows such an association between fluff types and feature-point distributions:
TABLE 1
Type                Distribution
Natural fluff       Poisson distribution
Artificial fluff    Uniform distribution
……                  ……
As shown in Table 1, when the user configures the fluff type as natural fluff, the electronic device can look up Table 1 to find the corresponding distribution, i.e., determine that the feature points in the preset image follow a Poisson distribution. When the configured type is artificial fluff, the lookup yields a uniform distribution of the feature points in the preset image.
S203, the electronic device determines the number of feature points in the preset image according to the density of the virtual fluff.
Different densities of virtual fluff correspond to different numbers of feature points in the preset image; in other words, the density of the virtual fluff determines the number of feature points. In some examples, the sparser the configured fluff, the fewer the feature points in the preset image; the denser the fluff, the more feature points. Accordingly, an association between fluff density and feature-point count can be established in advance, and the electronic device can then determine the number of feature points in the preset image from the density configured by the user.
In some examples, the density of the virtual fluff may be divided into several levels, with one end of the scale representing denser fluff and the other sparser fluff. Table 2 shows an association between density levels and the number of feature points in the preset image:
TABLE 2
Density level       Number of feature points
Level 1             1000
Level 2             900
……                  ……
As shown in Table 2, when the user configures the density as level 1, the electronic device can look up Table 2 to find the corresponding count, i.e., determine that the preset image contains 1000 feature points. When the configured density is level 2, the lookup yields 900 feature points in the preset image.
It should be noted that the density levels and counts shown in Table 2 are merely exemplary; other levels and other counts are possible in other embodiments.
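To make the two lookups of S202 and S203 concrete, the following sketch encodes Tables 1 and 2 as dictionaries. The dictionary representation and the function name are illustrative assumptions; only the level-1 and level-2 counts are taken from Table 2.

```python
# Pre-built associations from Tables 1 and 2 (representation assumed).
TYPE_TO_DISTRIBUTION = {
    "natural": "poisson",     # animal fur ~ Poisson distribution
    "artificial": "uniform",  # chemical-fiber carpet ~ uniform distribution
}

DENSITY_LEVEL_TO_COUNT = {
    1: 1000,  # per Table 2
    2: 900,   # per Table 2
    # ... further levels as configured
}

def resolve_feature_point_plan(fluff_type: str, density_level: int):
    """Map the user-configured attributes to a feature-point distribution
    (S202) and a feature-point count (S203) by table lookup."""
    return TYPE_TO_DISTRIBUTION[fluff_type], DENSITY_LEVEL_TO_COUNT[density_level]
```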
It should also be noted that the embodiments of the present disclosure do not limit the execution order of S202 and S203; in other embodiments, the electronic device may execute S203 before S202, or execute S202 and S203 simultaneously.
In other embodiments, the electronic device may also receive a user-configured thickness of the virtual fluff. Different thicknesses correspond to different pixel sets for the feature points in the preset image; in other words, the thickness of the virtual fluff determines each feature point's pixel set. The thicker the configured fluff, the more pixels in the pixel set corresponding to a feature point; the finer the fluff, the fewer pixels in the pixel set.
S204, the electronic device generates a noise image according to the distribution of the feature points and the number of the feature points.
In some examples, when the user configures the fluff type as natural fluff, the electronic device may generate feature points conforming to a Poisson distribution in the preset image based on a Poisson-disc sampling algorithm and the number of feature points. The electronic device first randomly generates an initial sample point and marks it as an active sampling point. It then enters a loop: in each iteration it randomly takes one active sampling point as the center of an annular region with inner radius R and outer radius 2R (0 < R < 1), and randomly generates K (K > 0) candidate sampling points within the annular region. A candidate whose distance to every existing sampling point is greater than R is accepted and also marked as an active sampling point; if none of the K candidates is accepted, the chosen active sampling point is marked as inactive. The loop repeats until no active sampling points remain; the resulting (now inactive) sampling points are the feature points conforming to the Poisson distribution.
For ease of understanding, the procedure is described below with reference to the accompanying drawings. Fig. 4 is a flowchart of the method for generating the feature points; the method includes:
S401, the electronic device acquires R and K.
R represents the minimum allowed distance between any two sampling points and is tied to the density of the virtual fluff, i.e., an association between R and the number of feature points can be set: a larger R yields sparser fluff, and a smaller R yields denser fluff. K may be a constant, for example 5 or 10; it is the number of candidate sampling points generated around each active sampling point and screened against the minimum-distance constraint.
S402, the electronic device randomly generates the first sampling point and marks it as an active sampling point.
S403, the electronic device randomly selects one active sampling point, takes it as the center, obtains the annular region with radii R and 2R, and randomly generates K candidate sampling points within the annular region.
S404, if all K candidate sampling points are screened out, S405 is executed; otherwise, S406 is executed.
A candidate sampling point is screened out when its distance to at least one existing sampling point is not greater than R.
S405, the electronic device marks the selected active sampling point as an inactive sampling point.
S406, the electronic device marks each candidate sampling point whose distance to all existing sampling points is greater than R as an active sampling point.
S407, if all sampling points are inactive sampling points, the Poisson-disc sampling ends; otherwise, execution loops back to S403.
Through the above method, the electronic device automatically generates feature points that match natural fluff (Poisson distribution) and the configured density, so the user no longer needs to adjust the position of each feature point and the spacing between feature points one by one; feature points meeting the requirements are obtained directly, which simplifies user operations and improves efficiency.
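For reference, a minimal sketch of the sampling loop of S401–S407 follows, assuming a unit-square preset image and brute-force distance checks (a production version would accelerate the checks with a background grid, as in Bridson's formulation); the function and parameter names are illustrative, not taken from the disclosure.

```python
import math
import random

def poisson_disc_points(r: float, k: int = 10, width: float = 1.0,
                        height: float = 1.0, seed: int = 0):
    """Poisson-disc sampling per S401-S407: r is the minimum allowed
    distance between sampling points (larger r -> sparser fluff), and k
    is the number of candidates generated per active point per round."""
    rng = random.Random(seed)
    samples = [(rng.uniform(0, width), rng.uniform(0, height))]  # S402
    active = [0]  # indices of active sampling points

    def far_enough(p):
        # Screening: the candidate must be farther than r from every
        # existing sampling point.
        return all(math.dist(p, q) > r for q in samples)

    while active:  # S407: stop when no active sampling points remain
        i = rng.choice(active)  # S403: pick one active point at random
        cx, cy = samples[i]
        accepted_any = False
        for _ in range(k):
            # Candidate inside the annulus with radii r and 2r.
            rho = rng.uniform(r, 2 * r)
            theta = rng.uniform(0, 2 * math.pi)
            p = (cx + rho * math.cos(theta), cy + rho * math.sin(theta))
            if 0 <= p[0] < width and 0 <= p[1] < height and far_enough(p):
                samples.append(p)                # S406: accepted candidates
                active.append(len(samples) - 1)  # become active points
                accepted_any = True
        if not accepted_any:
            active.remove(i)  # S405: all k candidates were screened out
    return samples  # now-inactive sampling points serve as feature points
```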
In other examples, when the user configures the fluff type as artificial fluff, the electronic device may generate uniformly distributed feature points in the preset image based on a uniform-distribution algorithm and the number of feature points. For example, based on the number of feature points, the electronic device may divide the preset image into N×N grid cells and take the center of each cell as a feature point. As shown in Fig. 5, the electronic device may divide the preset image into 3×3 cells, with each cell center 501 serving as a feature point; such feature points are uniformly distributed. The more feature points, the larger N and the denser the virtual fluff; the fewer feature points, the smaller N and the sparser the fluff.
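A corresponding sketch for the uniform case of Fig. 5, under the same illustrative assumptions:

```python
def uniform_grid_points(n: int, width: float = 1.0, height: float = 1.0):
    """Uniformly distributed feature points for artificial fluff: split the
    preset image into an n x n grid (3 x 3 in Fig. 5) and take each cell
    centre as a feature point; n grows with the configured density."""
    return [(((i + 0.5) * width / n), ((j + 0.5) * height / n))
            for j in range(n) for i in range(n)]
```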
It should be noted that the electronic device may precompute the feature points on the CPU, i.e., determine the feature points in the preset image in advance, so that they need not be recomputed on the GPU each time a noise image is subsequently generated.
After generating the feature points in the preset image, the electronic device may generate the noise image in combination with a Voronoi algorithm, for example through a graphics processor (GPU).
In some examples, the electronic device may calculate the Euclidean distance between a pixel in the preset image (e.g., the pixel's center point) and a feature point, compare that distance with a distance threshold, and add the pixel to the feature point's pixel set when the distance is smaller than the threshold. Processing each pixel and feature point in the preset image in this way yields a pixel set for each feature point; each pixel set serves as one noise dot, producing the noise image.
In other examples, the electronic device may generate the feature points' pixel sets according to the user-configured thickness of the virtual fluff and the distribution of the feature points in the preset image. Specifically, the electronic device determines the distance threshold corresponding to the configured thickness according to a preset correspondence between thickness and distance, calculates the Euclidean distance between each pixel in the preset image and the feature point, and adds the pixels whose distance is smaller than the threshold to the pixel set. Because the pixel sets are derived from the configured thickness, a thicker configuration yields more pixels per set, larger noise dots, and thus coarser generated fluff; a finer configuration yields fewer pixels per set, smaller noise dots, and finer fluff.
The electronic device may find, for each pixel, the distance to its nearest feature point based on the following formula:
c = min((U - X_i)² + (V - Y_i)²), i = 1, 2, …, n
where c represents the squared Euclidean distance between the pixel and the feature point nearest to it (squared distance is monotonic in the true distance, so it can be compared against a squared threshold), U and V represent the pixel's horizontal and vertical coordinates, and X_i and Y_i represent the coordinates of the i-th feature point.
It should be noted that the above formula is merely an example; in the actual distance calculation, a noise-distribution jitter parameter may be used to offset the corresponding feature points, and a random noise-size parameter may be used to weight the computed distances.
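As an illustration, the following sketch combines the nearest-feature-point formula above with a thickness-derived distance threshold to assemble the pixel sets into a binary noise image. The names, the CPU-side loop, and the 0/1 output are assumptions made for the sketch; as noted, an implementation would typically evaluate this per pixel on the GPU and may add the jitter and size parameters.

```python
def noise_image(feature_points, size: int, distance_threshold: float):
    """Worley/Voronoi-style cellular noise: a pixel joins the pixel set of
    its nearest feature point when the squared distance c (formula above)
    falls below the squared threshold, so a larger threshold (coarser
    fluff) yields larger noise dots. Returns a size x size 0/1 grid."""
    image = [[0] * size for _ in range(size)]
    t2 = distance_threshold ** 2  # compare squared quantities
    for py in range(size):
        for px in range(size):
            u = (px + 0.5) / size  # pixel centre in normalised coordinates
            v = (py + 0.5) / size
            # c = min((U - X_i)^2 + (V - Y_i)^2), i = 1..n
            c = min((u - x) ** 2 + (v - y) ** 2 for x, y in feature_points)
            if c < t2:
                image[py][px] = 1  # pixel belongs to a noise dot
    return image

# E.g. a uniform noise image in the spirit of Fig. 6B:
#   img = noise_image(uniform_grid_points(3), size=256, distance_threshold=0.08)
```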
In some embodiments, the electronic device may also present the noise image to the user. Fig. 6A shows a noise image conforming to a Poisson distribution, and Fig. 6B a noise image conforming to a uniform distribution. Taking Fig. 6B as an example, the larger a white area, the more pixels in the corresponding feature point's pixel set and the larger the noise dot, i.e., the coarser the virtual fluff rendered from the noise image; and because the white areas are evenly spread, the generated fluff is also fairly even, matching the uniform-distribution characteristic of artificial fluff.
S205, the electronic device generates the virtual fluff of the virtual prop based on the noise image.
After generating the noise image, the electronic device can render the virtual fluff of the virtual prop based on it. In some embodiments, the electronic device may also display the virtual fluff so that the user can preview the effect produced by the noise image.
Based on the above description, the embodiments of the present disclosure provide a virtual fluff generation method in which the user adjusts the virtual fluff of a virtual prop simply by inputting its attribute information, without manually adjusting the noise image. Further, the method uses the GPU to generate the noise image from the attribute information configured by the user, so the virtual fluff corresponding to the noise image is generated more efficiently. In addition, the relevant code only needs to run on the electronic device that authors the noise image (e.g., the design end), adding no burden to the client devices that use the virtual prop, which makes the method convenient and friendly.
Fig. 7 is a schematic diagram of a virtual fluff generating device according to an exemplary embodiment of the present disclosure. As shown in Fig. 7, the virtual fluff generating device 700 includes:
an obtaining unit 701, configured to obtain attribute information configured by a user for the virtual fluff of a virtual prop; the attribute information comprises the type and the density of the virtual fluff;
a feature point determining unit 702, configured to determine the distribution of feature points in a preset image according to the type of the virtual fluff, and to determine the number of feature points in the preset image according to the density of the virtual fluff;
a generating unit 703, configured to generate a noise image according to the distribution of the feature points and the number of the feature points, and to generate the virtual fluff of the virtual prop based on the noise image.
Optionally, the attribute information further includes the thickness of the virtual fluff, and the generating unit 703 is configured to generate the pixel sets corresponding to the feature points according to the thickness and the distribution of the feature points in the preset image, and to generate the noise image according to the pixel sets of the feature points.
Optionally, the generating unit 703 is configured to determine the distance threshold corresponding to the thickness according to a preset correspondence between thickness and distance, and to add pixels in the preset image whose distance to a feature point is smaller than the distance threshold to the pixel set corresponding to that feature point.
Optionally, the feature point determining unit 702 is configured to determine that the distribution of feature points in the preset image is a Poisson distribution when the type of the virtual fluff is natural fluff.
Optionally, the feature point determining unit 702 is configured to determine that the distribution of feature points in the preset image is uniform when the type of the virtual fluff is artificial fluff.
Optionally, the device further includes a display unit configured to present the virtual fluff of the virtual prop to the user.
Optionally, the generating unit 703 is configured to generate, by using a graphics processor GPU, a noise image according to the distribution of the feature points and the number of the feature points.
The functions of the above modules have been described in detail in the method steps of the above embodiments and are not repeated here.
Referring now to fig. 8, a schematic diagram of a configuration of an electronic device 800 suitable for use in implementing embodiments of the present disclosure is shown, which may be used to implement the corresponding functions of the virtual fluff generating device 700 shown in fig. 7. The electronic device shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 8, the electronic device 800 may include a processing means (e.g., a central processor, a graphics processor, etc.) 801, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 are also stored. The processing device 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, etc.; storage 808 including, for example, magnetic tape, hard disk, etc.; communication means 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 shows an electronic device 800 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 809, or installed from storage device 808, or installed from ROM 802. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 801.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire attribute information configured by a user for the virtual fluff of a virtual prop, the attribute information comprising the type and the density of the virtual fluff; determine the distribution of feature points in a preset image according to the type of the virtual fluff; determine the number of feature points in the preset image according to the density of the virtual fluff; generate a noise image according to the distribution of the feature points and the number of the feature points; and generate the virtual fluff of the virtual prop based on the noise image.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. In some cases, the name of a module does not limit the module itself; for example, the first acquisition module may also be described as "a module that acquires at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides a virtual fluff generating method according to one or more embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, Example 2 provides the method of Example 1, wherein the attribute information further includes the thickness of the virtual fluff, and the generating a noise image according to the distribution of the feature points and the number of the feature points includes: generating pixel sets corresponding to the feature points according to the thickness and the distribution of the feature points in the preset image; and generating the noise image according to the pixel sets of the feature points.
According to one or more embodiments of the present disclosure, Example 3 provides the method of Example 2, wherein the generating pixel sets corresponding to the feature points according to the thickness and the distribution of the feature points in the preset image includes: determining the distance threshold corresponding to the thickness according to a preset correspondence between thickness and distance; and adding pixels in the preset image whose distance to a feature point is smaller than the distance threshold to the pixel set corresponding to that feature point.
According to one or more embodiments of the present disclosure, Example 4 provides the method of any one of Examples 1 to 3, wherein the determining the distribution of feature points in a preset image according to the type of the virtual fluff includes: when the type of the virtual fluff is natural fluff, determining that the distribution of feature points in the preset image is a Poisson distribution.
According to one or more embodiments of the present disclosure, Example 5 provides the method of any one of Examples 1 to 3, wherein the determining the distribution of feature points in a preset image according to the type of the virtual fluff includes: when the type of the virtual fluff is artificial fluff, determining that the distribution of feature points in the preset image is uniform.
According to one or more embodiments of the present disclosure, Example 6 provides the method of Example 1, the method further comprising: presenting the virtual fluff of the virtual prop to the user.
According to one or more embodiments of the present disclosure, Example 7 provides the method of Example 1, wherein the generating a noise image according to the distribution of the feature points and the number of the feature points comprises: generating the noise image through a graphics processor (GPU) according to the distribution of the feature points and the number of the feature points.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example solutions formed by substituting the above features with technical features of similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims. The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.

Claims (11)

1. A virtual fluff generating method, comprising:
acquiring attribute information configured by a user for the virtual fluff of a virtual prop; the attribute information comprises the type and the density of the virtual fluff;
determining the distribution of feature points in a preset image according to the type of the virtual fluff; determining the number of feature points in the preset image according to the density of the virtual fluff;
generating a noise image according to the distribution of the feature points and the number of the feature points;
and generating the virtual fluff of the virtual prop based on the noise image.
2. The method according to claim 1, wherein the attribute information further comprises the thickness of the virtual fluff; and the generating a noise image according to the distribution of the feature points and the number of the feature points comprises:
generating pixel sets corresponding to the feature points according to the thickness and the distribution of the feature points in the preset image;
and generating the noise image according to the pixel sets of the feature points.
3. The method according to claim 2, wherein the generating pixel sets corresponding to the feature points according to the thickness and the distribution of the feature points in the preset image comprises:
determining the distance threshold corresponding to the thickness according to a preset correspondence between thickness and distance;
and adding pixels in the preset image whose distance to a feature point is smaller than the distance threshold to the pixel set corresponding to that feature point.
4. The method according to any one of claims 1-3, wherein the determining the distribution of feature points in a preset image according to the type of the virtual fluff comprises:
when the type of the virtual fluff is natural fluff, determining that the distribution of feature points in the preset image is a Poisson distribution.
5. The method according to any one of claims 1-3, wherein the determining the distribution of feature points in a preset image according to the type of the virtual fluff comprises:
when the type of the virtual fluff is artificial fluff, determining that the distribution of feature points in the preset image is uniform.
6. The method according to claim 1, wherein the method further comprises:
presenting the virtual fluff of the virtual prop to a user.
7. The method according to claim 1, wherein the generating a noise image according to the distribution of the feature points and the number of the feature points comprises:
generating the noise image through a graphics processor (GPU) according to the distribution of the feature points and the number of the feature points.
8. A virtual fluff generating device, comprising:
an acquisition unit, configured to acquire attribute information configured by a user for the virtual fluff of a virtual prop; the attribute information comprises the type and the density of the virtual fluff;
a feature point determining unit, configured to determine the distribution of feature points in a preset image according to the type of the virtual fluff, and to determine the number of feature points in the preset image according to the density of the virtual fluff;
and a generating unit, configured to generate a noise image according to the distribution of the feature points and the number of the feature points, and to generate the virtual fluff of the virtual prop based on the noise image.
9. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing said computer program in said storage means to carry out the steps of the method of any one of claims 1 to 7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processing device, carries out the steps of the method according to any one of claims 1 to 7.
11. A computer program product, characterized in that the computer program product, when run on a computer, causes the computer to perform the method according to any of claims 1 to 7.
CN202111633028.5A 2021-12-28 2021-12-28 Virtual fluff generation method, device, equipment, medium and product Pending CN116402980A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111633028.5A CN116402980A (en) 2021-12-28 2021-12-28 Virtual fluff generation method, device, equipment, medium and product
PCT/CN2022/139558 WO2023125071A1 (en) 2021-12-28 2022-12-16 Virtual fluff generation method and apparatus, device, medium and product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111633028.5A CN116402980A (en) 2021-12-28 2021-12-28 Virtual fluff generation method, device, equipment, medium and product

Publications (1)

Publication Number Publication Date
CN116402980A true CN116402980A (en) 2023-07-07

Family

ID=86997695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111633028.5A Pending CN116402980A (en) 2021-12-28 2021-12-28 Virtual fluff generation method, device, equipment, medium and product

Country Status (2)

Country Link
CN (1) CN116402980A (en)
WO (1) WO2023125071A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108010118B (en) * 2017-11-28 2021-11-30 杭州易现先进科技有限公司 Virtual object processing method, virtual object processing apparatus, medium, and computing device
CN111311757B (en) * 2020-02-14 2023-07-18 惠州Tcl移动通信有限公司 Scene synthesis method and device, storage medium and mobile terminal
CN111462313B (en) * 2020-04-02 2024-03-01 网易(杭州)网络有限公司 Method, device and terminal for realizing fluff effect
US11348325B2 (en) * 2020-05-06 2022-05-31 Cds Visual, Inc. Generating photorealistic viewable images using augmented reality techniques
CN113822981B (en) * 2020-06-19 2023-12-12 北京达佳互联信息技术有限公司 Image rendering method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2023125071A1 (en) 2023-07-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination