CN111445563B - Image generation method and related device - Google Patents

Info

Publication number
CN111445563B
CN111445563B (application CN202010207706.0A)
Authority
CN
China
Prior art keywords
map
shape
dimension
noise
image
Prior art date
Legal status
Active
Application number
CN202010207706.0A
Other languages
Chinese (zh)
Other versions
CN111445563A (en)
Inventor
李兵 (Li Bing)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010207706.0A priority Critical patent/CN111445563B/en
Publication of CN111445563A publication Critical patent/CN111445563A/en
Application granted granted Critical
Publication of CN111445563B publication Critical patent/CN111445563B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G06T 3/04
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/70
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Abstract

The application discloses an image generation method and a related device. A shape map associated with a target image is obtained, and noise maps of the shape map in at least two dimensions are then determined according to a preset rule, where the preset rule is determined based on color channel information of the shape map and the color channel information corresponds to the noise maps; image synthesis is then performed based on the noise maps to generate the target image. The method and the device realize an image generation process that simulates a complex three-dimensional image from a small number of maps; because the material for image synthesis is obtained by noise transformation of the shape map, it neither occupies a large amount of resources nor requires repeated map superposition operations, which improves image generation efficiency.

Description

Image generation method and related device
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image generation method and a related apparatus.
Background
With the development of mobile terminal technologies, requirements on images are increasingly high. In games, complex three-dimensional models need to be displayed, which requires converting the complex three-dimensional model into a simulated image; for example, cloud simulation, that is, simulating a cloud with an appropriate noise model based on the formation and shape of clouds in the physical world.
In general, a complex three-dimensional model can be displayed by stacking a large number of model maps; for example, cloud simulation may use 5 to 400 semi-transparent model map materials per cloud. When the model map materials are rendered, each always faces the camera, and a three-dimensional cloud image is then generated by back-to-front superposition.
However, superimposing a large number of semi-transparent maps requires many maps of different types and occupies considerable memory; the step-by-step superposition also involves a large amount of redundant and complicated computation, so the image generation process is inefficient.
Disclosure of Invention
In view of this, the present application provides an image generation method that avoids the low image generation efficiency caused by processing a large number of maps, improving the efficiency of the image generation process.
A first aspect of the present application provides an image generating method, which may be applied to a system or a program including an image generating function in a terminal device, and specifically includes: acquiring a shape map, wherein the shape map is related to a target image;
determining noise maps of the shape maps under at least two dimensions according to a preset rule, wherein the preset rule is determined based on color channel information of the shape maps, and the color channel information corresponds to the noise maps;
and performing image synthesis based on the noise map to generate the target image.
Optionally, in some possible implementation manners of the present application, the determining the noise maps of the shape map in at least two dimensions according to a preset rule includes:
determining at least one first dimension map based on shape features of the shape map, the shape features being determined based on a pixel distribution of the shape map;
performing adjustment with the first dimension map as a template to determine a second dimension map;
the image synthesis based on the noise map to generate the target image comprises:
and carrying out image synthesis according to the first dimension map and the second dimension map to generate the target image.
Optionally, in some possible implementations of the present application, the determining at least one first dimension map based on shape features of the shape map includes:
acquiring a pixel grid of the shape map, wherein the pixel grid comprises a plurality of pixels;
determining at least one feature point in the pixel grid;
and distributing noise values according to the distance between the characteristic points and the pixels to obtain at least one first dimension map.
Optionally, in some possible implementations of the present application, the method further includes:
determining an image type indicated by the shape map;
and inverting the noise value according to the image type to update the first dimension map.
Optionally, in some possible implementations of the present application, the distributing noise values according to the distances between the feature points and the pixels to obtain at least one first dimension map includes:
distributing noise values according to the distance between the characteristic points and the pixels;
and carrying out frequency adjustment on the noise value to obtain at least one first dimension map.
Optionally, in some possible implementation manners of the present application, the adjusting according to the first dimension map as a template to determine the second dimension map includes:
inverting the first dimension map;
processing the inverted first dimension map according to fractal Brownian motion;
and intercepting the noise value in the processed first dimension map based on a preset threshold value to obtain the second dimension map.
Optionally, in some possible implementations of the present application, the performing image synthesis according to the first dimension map and the second dimension map to generate the target image includes:
adjusting the first dimension map to different frequencies based on a preset function;
and overlapping the first dimension maps with different frequencies by taking the second dimension map as a base to generate the target image.
Optionally, in some possible implementations of the present application, the method further includes:
acquiring hierarchical features of the shape map;
acquiring a detail map according to the hierarchical features, wherein the detail map is obtained by frequency adjustment based on the noise map;
updating the target image based on the detail map.
Optionally, in some possible implementations of the present application, the method further includes:
performing rotation variation on the noise map to obtain a rotation map;
and updating the target image based on the rotation map.
Optionally, in some possible implementation manners of the present application, the obtaining a shape map includes:
acquiring a three-dimensional shape model;
and performing ray tracing on the three-dimensional shape model in a target interface space to obtain the shape map, wherein the target interface space is the space where the target image is located.
Optionally, in some possible implementations of the present application, the performing ray tracing on the three-dimensional shape model in a target interface space to obtain the shape map includes:
obtaining distance information from a light source to the three-dimensional shape model, wherein the light source is a reference point in the target interface space;
performing a calculation of a distance field function based on the distance information to obtain a light pixel distribution;
and generating the shape map according to the light pixel distribution.
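The ray tracing described in this implementation can be sketched as a distance-field march (sphere tracing): step along a ray from the reference point by the distance-field value until the model surface is reached. The sphere distance function, step limits and tolerances below are illustrative assumptions, not the implementation of the application:

```python
import math

def sphere_sdf(p, centre=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from point p to a sphere surface (negative inside)."""
    return math.dist(p, centre) - radius

def raymarch(origin, direction, sdf, max_steps=64, eps=1e-3, max_dist=100.0):
    """March along the ray, stepping by the distance-field value each time,
    until the surface is reached (distance < eps) or the ray escapes.
    Returns the hit distance, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t
        t += d
        if t > max_dist:
            return None
    return None
```

The per-pixel hit distances obtained this way could then be turned into the light pixel distribution from which the shape map is generated.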
Optionally, in some possible implementations of the present application, the shape map includes a shape of a cloud, and the target image is used to indicate the shape of the cloud in three dimensions.
A second aspect of the present application provides an apparatus for image generation, comprising: an acquisition unit configured to acquire a shape map, the shape map being associated with a target image;
a determining unit, configured to determine a noise map of the shape map in at least two dimensions according to a preset rule, where the preset rule is determined based on color channel information of the shape map, and the color channel information corresponds to the noise map;
a generating unit configured to perform image synthesis based on the noise map to generate the target image.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to determine at least one first dimension map based on a shape feature of the shape map, where the shape feature is determined based on a pixel distribution of the shape map;
the determining unit is specifically configured to perform adjustment according to the first dimension map as a template to determine the second dimension map;
the generating unit is specifically configured to perform image synthesis according to the first dimension map and the second dimension map to generate the target image.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to obtain a pixel grid of the shape map, where the pixel grid includes a plurality of pixels;
the determining unit is specifically configured to determine at least one feature point in the pixel grid;
the determining unit is specifically configured to distribute noise values according to the distances between the feature points and the pixels to obtain at least one first dimension map.
Optionally, in some possible implementations of the present application, the determining unit is further configured to determine an image type indicated by the shape map;
the determining unit is further configured to invert the noise value according to the image type to update the first dimension map.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to distribute noise values according to distances between the feature points and the pixels;
the determining unit is specifically configured to perform frequency adjustment on the noise value to obtain at least one first dimension map.
Optionally, in some possible implementation manners of the present application, the determining unit is specifically configured to invert the first dimension map;
the determining unit is specifically configured to process the inverted first dimension map according to fractal brownian motion;
the determining unit is specifically configured to intercept the noise value in the processed first dimension map based on a preset threshold to obtain the second dimension map.
Optionally, in some possible implementation manners of the present application, the generating unit is specifically configured to adjust the first dimension map to different frequencies based on a preset function;
the generating unit is specifically configured to superimpose the first dimension maps with different frequencies based on the second dimension map to generate the target image.
Optionally, in some possible implementation manners of the present application, the generating unit is further configured to obtain a hierarchical feature of the shape map;
the generation unit is further used for acquiring a detail map according to the level features, and the detail map is obtained by performing frequency adjustment on the basis of the noise map;
the generating unit is further configured to update the target image based on the detail map.
Optionally, in some possible implementation manners of the present application, the generating unit is further configured to perform rotation variation on the noise map to obtain a rotation map;
the generating unit is further configured to update the target image based on the rotation map.
Optionally, in some possible implementation manners of the present application, the obtaining unit is specifically configured to obtain a three-dimensional shape model;
the obtaining unit is specifically configured to perform ray tracing on the three-dimensional shape model in a target interface space to obtain the shape map, where the target interface space is a space where the target image is located.
Optionally, in some possible implementation manners of the present application, the obtaining unit is specifically configured to obtain distance information between a light source and the three-dimensional shape model, where the light source is a reference point in the target interface space;
the obtaining unit is specifically configured to perform calculation of a distance field function based on the distance information to obtain light pixel distribution;
the obtaining unit is specifically configured to generate the shape map according to the light pixel distribution.
A third aspect of the present application provides a computer device comprising: a memory, a processor, and a bus system, where the memory is used for storing program code and the processor is configured to perform the image generation method of the first aspect or any implementation of the first aspect according to instructions in the program code.
A fourth aspect of the present application provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the image generation method of the first aspect or any implementation of the first aspect.
According to the technical scheme, the embodiment of the application has the following advantages:
the method comprises the steps of obtaining a shape map related to a target image, and then determining a noise map of the shape map under at least two dimensions according to a preset rule, wherein the preset rule is determined based on color channel information of the shape map, and the color channel information corresponds to the noise map; and then image synthesis is performed based on the noise map to generate a target image. The method realizes the image generation process of simulating the complex three-dimensional image based on a small number of maps, and the image synthesis material is obtained by the noise transformation of the shape maps, so that a large amount of resources are not required to be occupied, repeated map superposition operation is not required, and the image generation efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a diagram of a network architecture in which the image generation system operates;
FIG. 2 is a flowchart of image generation according to an embodiment of the present application;
FIG. 3 is a flowchart of an image generation method according to an embodiment of the present application;
FIG. 4 is a scene schematic diagram of an image generation method according to an embodiment of the present application;
FIG. 5 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 6 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 7 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 8 is a flowchart of another image generation method according to an embodiment of the present application;
FIG. 9 is a scene flowchart of a ray tracing method according to an embodiment of the present application;
FIG. 10 is a flowchart of another ray tracing method according to an embodiment of the present application;
FIG. 11 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 12 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 13 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an image generation apparatus according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides an image generation method and a related device, which can be applied to a system or a program containing an image generation function in a terminal device. A shape map associated with a target image is obtained, and noise maps of the shape map in at least two dimensions are then determined according to a preset rule, where the preset rule is determined based on color channel information of the shape map and the color channel information corresponds to the noise maps; image synthesis is then performed based on the noise maps to generate the target image. The method and the device realize an image generation process that simulates a complex three-dimensional image from a small number of maps; because the material for image synthesis is obtained by noise transformation of the shape map, it neither occupies a large amount of resources nor requires repeated map superposition operations, which improves image generation efficiency.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements explicitly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms that may appear in the embodiments of the present application are explained.
Noise map: a map generated by performing targeted processing on a certain color channel of an image, for example, calculating the Worley noise of the R channel of the original map.
Color channel: i.e., the RGBA channels, representing red (R), green (G), blue (B), and transparency (alpha), respectively.
Worley noise: a noise generation method based on feature-point transformation, also called cellular noise.
Perlin noise: a noise generation method that defines a set of vertices, each holding a random gradient vector. Each vertex exerts a potential-energy influence on the surrounding coordinates according to its gradient vector, with the potential rising along the vertex's gradient direction; the output value at a given coordinate is obtained by superimposing the potentials contributed by the nearby vertices. It is mainly used for irregular disturbance.
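The vertex-gradient construction just described can be sketched as follows. This is a minimal single-octave version in which the per-vertex gradient hashing and the fade (smoothstep) curve are the standard textbook choices, assumed here rather than taken from the application:

```python
import math
import random

def perlin2d(x, y, seed=0):
    """Minimal 2-D Perlin gradient noise: each lattice vertex holds a random
    unit gradient; the value at (x, y) blends the dot products of the four
    surrounding vertices' gradients with their offsets to (x, y)."""
    def gradient(ix, iy):
        # Deterministic pseudo-random unit gradient per lattice vertex.
        rng = random.Random(hash((ix, iy, seed)))
        angle = rng.uniform(0.0, 2.0 * math.pi)
        return math.cos(angle), math.sin(angle)

    def fade(t):
        # 6t^5 - 15t^4 + 10t^3 smoothstep, zero slope at lattice points.
        return t * t * t * (t * (t * 6 - 15) + 10)

    def dot(ix, iy):
        gx, gy = gradient(ix, iy)
        return gx * (x - ix) + gy * (y - iy)

    x0, y0 = math.floor(x), math.floor(y)
    u, v = fade(x - x0), fade(y - y0)
    top = dot(x0, y0) + u * (dot(x0 + 1, y0) - dot(x0, y0))
    bot = dot(x0, y0 + 1) + u * (dot(x0 + 1, y0 + 1) - dot(x0, y0 + 1))
    return top + v * (bot - top)
```

At integer lattice points the offset to the nearest vertex is zero, so the output is exactly 0; between vertices the blended gradient dot products produce the smooth irregular disturbance described above.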
Perlin-Worley noise: the application provides a method for carrying out Worley noise processing based on Perlin noise.
Fractal Brownian motion: a method for improving the fluidity of an image by superimposing noise in a loop, at each iteration increasing the frequency by a certain multiple while reducing the amplitude by a certain ratio.
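The loop just described can be sketched generically over any two-dimensional noise function; the octave count, frequency multiple (lacunarity) and amplitude ratio (gain) below are common defaults, not values specified by the application:

```python
def fbm(noise_fn, x, y, octaves=4, lacunarity=2.0, gain=0.5):
    """Fractal Brownian motion: sum several octaves of a base noise function,
    multiplying the frequency by `lacunarity` and the amplitude by `gain`
    at each iteration, then normalise back to the base noise range."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * noise_fn(x * frequency, y * frequency)
        norm += amplitude
        amplitude *= gain
        frequency *= lacunarity
    return total / norm
```

Any of the noise maps in this application (Worley, Perlin) could serve as `noise_fn` here.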
It should be understood that the image generation method provided by the present application may be applied to a system or a program that includes an image generation function in a terminal device, for example an image rendering platform. Specifically, the image generation system may operate in the network architecture shown in FIG. 1, which is a network architecture diagram of the image generation system. As shown in the figure, the image generation system can provide image generation for multiple information sources: the terminal establishes a connection with the server through the network, receives the map contents sent by the server, and then performs simulated rendering of the model in the scene according to the image generation method of the present application, so as to display the model. It is understood that FIG. 1 shows various terminal devices; in an actual scene, more or fewer types of terminal devices may participate in the image generation process, the specific number and type depending on the actual scene, which is not limited herein. In addition, FIG. 1 shows one server, but in an actual scene multiple servers may also participate, especially in a scene of multi-content application interaction; the specific number of servers depends on the actual scene.
It should be noted that the image generation method provided in this embodiment may also be performed offline, that is, without the participation of a server; in this case, the terminal connects with other terminals locally, and image generation is then performed between the terminals.
It is understood that the image generation system described above may run on a personal mobile terminal, for example as a game platform application; it may also run on a server, or on a third-party device that provides image generation so as to obtain an image generation processing result for an information source. The specific image generation system may run on the above devices in the form of a program, run as a system component in the above devices, or serve as one of a set of cloud service programs; the specific operation mode is determined by the actual scene and is not limited herein.
With the development of mobile terminal technologies, requirements on images are increasingly high. In games, complex three-dimensional models need to be displayed, which requires converting the complex three-dimensional model into a simulated image; for example, cloud simulation, that is, simulating a cloud with an appropriate noise model based on the formation and shape of clouds in the physical world.
In general, a complex three-dimensional model can be displayed by stacking a large number of model maps; for example, cloud simulation may use 5 to 400 semi-transparent model map materials per cloud. When the model map materials are rendered, each always faces the camera, and a three-dimensional cloud image is then generated by back-to-front superposition.
However, superimposing a large number of semi-transparent maps requires many maps of different types and occupies considerable memory; the step-by-step superposition also involves a large amount of redundant and complicated computation, so the image generation process is inefficient.
In order to solve the above problem, the present application proposes an image generation method applied to the flow framework of image generation shown in FIG. 2, which is the flow framework provided in the embodiment of the present application: first, the relevant basic shape map is obtained from the server; then different noise processing is performed on the shape map to obtain a plurality of noise maps; the noise maps are then superimposed to generate the target image.
It is understood that the method provided by the present application may be a program written as processing logic in a hardware system, or may be an image generation apparatus implementing that processing logic in an integrated or external manner. As one implementation, the image generation apparatus obtains a shape map associated with a target image, then determines noise maps of the shape map in at least two dimensions according to a preset rule, where the preset rule is determined based on color channel information of the shape map and the color channel information corresponds to the noise maps; image synthesis is then performed based on the noise maps to generate the target image. This realizes an image generation process that simulates a complex three-dimensional image from a small number of maps; because the material for image synthesis is obtained by noise transformation of the shape map, it neither occupies a large amount of resources nor requires repeated map superposition operations, which improves image generation efficiency.
With reference to the above flow architecture, the following describes an image generation method in the present application, please refer to fig. 3, where fig. 3 is a flow chart of an image generation method provided in an embodiment of the present application, and the embodiment of the present application at least includes the following steps:
301. A computer device obtains a shape map.
In this embodiment, the shape map is associated with the target image. For example, if the target image is a three-dimensional cloud image, the shape map is a map containing the basic shape of the cloud, displayed based on the coordinates of the current display interface.
It will be appreciated that the shape map may be sent by a server, stored by the computer device itself, or transformed from a three-dimensional model, for example by ray tracing the three-dimensional model to obtain its image display under the current interface.
302. The computer device determines a noise map of the shape map in at least two dimensions according to preset rules.
In this embodiment, the preset rule is determined based on the color channel information of the shape map, and the color channel information corresponds to the noise maps. The color channels include the R, G, B and A channels, and a noise map may be obtained by noise processing the map under a designated channel, for example, applying Worley noise processing to the R channel. In an actual scene, different color channels are processed according to the requirements of the model, for example: the first dimension maps are generated from the G, B and A channels, and the second dimension map from the R channel; the specific color channel selection is determined by the actual scene and is not limited herein.
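One way to realize the correspondence between color channels and noise maps is to pack each dimension's noise grid into one channel of a single RGBA texture, so that a single texture fetch yields all of them; the following packing sketch is an illustrative assumption, not the application's storage scheme:

```python
def pack_rgba(r_map, g_map, b_map, a_map):
    """Pack four equally sized single-channel noise grids (values in [0, 1])
    into one RGBA grid: one lookup then yields every dimension's noise."""
    h, w = len(r_map), len(r_map[0])
    return [[(r_map[y][x], g_map[y][x], b_map[y][x], a_map[y][x])
             for x in range(w)] for y in range(h)]
```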
Next, the preset rule for dividing the different noise maps is explained. Three-dimensional images generally include background and foreground portions, and in some model renderings the background and foreground blend with each other; for example, when drawing a cloud, the foreground is the protruding cloud and the background is a blurred element containing the cloud shape. Therefore, this embodiment can simulate complex models in which background and foreground interact, that is, perform noise generation in at least two dimensions, which corresponds to models simulating clouds, smoke, and the like.
Specifically, at least one first dimension map is first determined based on the shape features of the shape map. The first dimension map is used to indicate the texture features of the image; since texture features generally arise from the distribution of pixels, the shape features of the shape map may likewise be determined based on its pixel distribution. For example, in the image simulation of smoke, the pixel value at the center of the smoke is higher than the surrounding pixel values. Further, to ensure the completeness of the displayed texture features, a plurality of first dimension maps can be determined based on the shape features of the shape map, thereby improving the accuracy of the image. In one possible scene, the first dimension map may be the G channel processed with Worley noise.
After the first dimension map is determined, adjustment can be performed with the first dimension map as a template to determine the second dimension map, that is, the texture of the model is loaded onto the corresponding background; image synthesis is then carried out according to the first dimension map and the second dimension map to generate the target image.
Optionally, the determination of the shape features may be performed based on the pixel distribution. Specifically, FIG. 4 is a scene schematic diagram of an image generation method provided in the embodiment of the present application, indicating the pixel grid A1 and the feature point A2. The pixel grid A1 of the shape map is obtained first, where the pixel grid comprises a plurality of pixels; at least one feature point A2 is then determined in the pixel grid; noise values are then distributed according to the distances between the feature point and the pixels to obtain the first dimension map. That is, with the feature point A2 as the center, the noise changes gradually outward, exhibiting the convex effect shown in FIG. 4.
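The distance-based assignment of noise values around feature points, as in FIG. 4, can be sketched as a single-octave Worley pass; the random feature-point placement and the normalisation by the grid diagonal below are assumptions for illustration:

```python
import math
import random

def worley(width, height, n_points, seed=0):
    """Minimal single-octave Worley (cellular) noise over a pixel grid.

    Each pixel's value is its distance to the nearest feature point,
    normalised by the grid diagonal so values lie in [0, 1]."""
    rng = random.Random(seed)
    points = [(rng.uniform(0, width), rng.uniform(0, height))
              for _ in range(n_points)]
    max_d = math.hypot(width, height)
    return [[min(math.hypot(x - px, y - py) for px, py in points) / max_d
             for x in range(width)] for y in range(height)]
```

Pixels close to a feature point receive low values and the value grows with distance, producing the convex cells shown in FIG. 4.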
It will be appreciated that for some models the noise value at the feature points should be low rather than high; for example, in a cloud model the cloud becomes whiter toward its raised center, so the noise values can be inverted. Specifically, the image type indicated by the shape map is determined first; the noise values are then inverted according to the image type to update the first dimension map, for example when generating a cloud model. Fig. 5 is a schematic view of a scene of another image generation method provided in an embodiment of the present application; it shows the result of inverting the noise values of the map in fig. 4. The noise value at feature point B1 is low and appears white, so the features of the cloud can be better simulated.
Optionally, since the first dimension map reflects texture features of the image, processing the shape map with different methods would yield different texture features, but using many different processing methods would reduce the efficiency of image generation. Instead, frequency conversion can be performed on a single method to simulate texture features of different granularities. Specifically, noise values are first distributed according to the distance between the feature point and each pixel, e.g., gradually increasing, decreasing, or following a specific profile; the frequency of the noise values is then adjusted to obtain at least one first dimension map. Fig. 6 is a scene schematic diagram of another image generation method provided in an embodiment of the present application; the images in the figure are shape maps processed with Worley noise, with frequency increasing from left to right, so that the details of the model can be simulated well.
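The frequency conversion described above can be sketched as sampling one base noise function at scaled, wrapped coordinates; `noise_fn` here stands for any hypothetical base noise defined on the unit square — an assumption for illustration:

```python
def at_frequency(noise_fn, freq):
    """Sample a base noise function at `freq` times the base frequency by
    scaling and wrapping the coordinates; higher freq yields finer detail."""
    def sampled(x, y):
        return noise_fn((x * freq) % 1.0, (y * freq) % 1.0)
    return sampled
```

The same base noise, wrapped at frequencies 1, 2, 4, …, reproduces the left-to-right progression of fig. 6 without introducing new processing methods.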
Optionally, since the second dimension map reflects the background of the model, and the background may be associated with the foreground, the second dimension map can be generated based on the first dimension map by the process shown in fig. 7, which is a scene schematic diagram of another image generation method provided in an embodiment of the present application. The figure shows that the first dimension map (Worley noise) is first inverted; then, to improve the fluidity of the background, the inverted first dimension map can be processed according to fractal Brownian motion; finally, the noise values in the processed first dimension map are intercepted against a preset threshold to obtain the second dimension map, namely Perlin-Worley noise. This yields a background map with a higher degree of fusion with the foreground.
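The fractal Brownian motion step above can be sketched as octave summation of a base noise; a minimal illustration, assuming a base noise returning values in a fixed range (the inversion and threshold interception would wrap around this function):

```python
def fbm(noise_fn, x, y, octaves=4, lacunarity=2.0, gain=0.5):
    """Fractal Brownian motion: sum successive octaves of a base noise,
    scaling frequency up by `lacunarity` and amplitude down by `gain`."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * noise_fn(x * frequency, y * frequency)
        norm += amplitude
        amplitude *= gain
        frequency *= lacunarity
    return total / norm  # normalized back into the base noise's range
```

Because each octave adds finer, weaker variation, the result has the flowing, cloud-like irregularity that improves the fluidity of the background.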
In a possible scenario, the target image may include images of a plurality of different types of models, and at this time, the images may be generated by using the processing manners with different dimensions in the above embodiments, where the specific number is determined by the actual scene.
303. The computer device performs image synthesis based on the noise map to generate a target image.
In this embodiment, the synthesis of the noise maps may be performed with a preset function, which may be a remapping function (remap): the first dimension maps are adjusted to different frequencies, and the first dimension maps of different frequencies are then superimposed on the second dimension map, which serves as the base, to generate the target image. Specifically, the remap function can be implemented as follows:
float remap(in float value, in float original_min, in float original_max, in float new_min, in float new_max) // source value range
{
return new_min + ((value - original_min) / (original_max - original_min)) * (new_max - new_min); // target value range
}
The target image is generated by processing the corresponding channels in different dimensions — for example, the R channel holds Perlin-Worley noise and the G, B and A channels hold Worley noise of different frequencies — and then fusing the noise values through remap.
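As a hedged sketch, the per-pixel channel fusion described above might look as follows; the octave weights and the erosion-style remap call are illustrative assumptions, not values given in this embodiment:

```python
def remap(value, original_min, original_max, new_min, new_max):
    """Linearly map value from [original_min, original_max] to [new_min, new_max]."""
    t = (value - original_min) / (original_max - original_min)
    return new_min + t * (new_max - new_min)

def combine_channels(r, g, b, a):
    """Use the higher-frequency Worley channels (G, B, A) to erode the
    Perlin-Worley base stored in R, then renormalize with remap."""
    # illustrative octave weights; sum < 1 keeps the remap denominator nonzero
    detail = 0.5 * g + 0.3 * b + 0.15 * a
    return max(0.0, remap(r, detail, 1.0, 0.0, 1.0))
```

The weighted detail value acts as the new lower bound of the base noise, so strong detail carves material away while the base shape is preserved.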
From the above embodiment it can be seen that a shape map related to the target image is obtained; a noise map of the shape map in at least two dimensions is then determined according to a preset rule, where the preset rule is determined based on color channel information of the shape map and the color channel information corresponds to the noise map; image synthesis is then performed based on the noise map to generate the target image. This realizes a process that simulates a complex three-dimensional image from a small number of maps, and because the material for image synthesis is obtained by noise transformation of the shape map, it neither occupies a large amount of resources nor requires repeated map superposition operations, which improves image generation efficiency.
The above embodiments describe the process of image generation, but since the details of the model are different in different scenes, some noise transformation may be performed to make the image more realistic, and the process is described below. Referring to fig. 8, fig. 8 is a flowchart of another image generation method according to an embodiment of the present disclosure, where the embodiment of the present disclosure at least includes the following steps:
801. a computer device obtains a three-dimensional model.
In this embodiment, the three-dimensional model is general model data; such data generally occupies a large amount of space and needs to be mapped into the current interface scene for display. In particular, on mobile terminals the processing capacity of the device is limited, so the three-dimensional model needs to be converted, i.e., restored under screen coordinates.
802. And the computer equipment carries out ray tracing to obtain the shape map.
In this embodiment, ray tracing mainly involves processing surface data (isosurface) and volume data; for surface data this means simulating the reflection of rays and rendering the image. Fig. 9 is a scene flow diagram of the ray tracing method provided in an embodiment of the present application, which comprises the following steps:
(1) Ray casting. For each pixel of the final image, a line of sight is cast through the model. At this stage it is useful to consider the model enclosed in a bounding primitive — a simple geometric object used to intersect the line of sight with the model.
(2) Sampling. Equidistant sampling points (samples) are selected along the portion of the line of sight that lies within the model. In general, the model is not aligned with the line of sight, so the sample points usually fall between voxels, and their values must be interpolated from the surrounding voxels.
(3) Shading. For each sample point, a transfer function retrieves an RGBA material color, and the gradient of the illumination values is computed. The gradient represents the orientation of the local surface within the model. The sample is then shaded — colored and illuminated — according to its surface orientation and the position of the light source in the scene.
(4) Compositing. After all sample points have been shaded, they are composited along the line of sight to produce the final color value of the pixel currently being processed. Compositing can run back to front, starting from the sample farthest from the viewer and ending with the closest one; this direction ensures that occluded portions of the model do not affect the generated pixel. A front-to-back order can instead increase computational efficiency, because the remaining light energy decreases as the ray travels away from the camera and the traversal can terminate early.
Specifically, the process of intersecting the above models can be performed with reference to the following equation:
f(x,y,z)=f(P)=0
This equation symbolizes the signed distance function: when it equals 0 the point lies on the surface of the model, when it is greater than 0 the point is outside the model, and when it is less than 0 the point is inside the model.
In one possible scenario, where the model is a sphere, the corresponding equation is expressed as:
f(x, y, z) = √(x² + y² + z²) − r = 0
wherein x, y and z are the coordinates of a pixel point in the spherical model and r is the radius, i.e., this is the distance equation of a sphere. Correspondingly, many basic solid structures can be expressed by such equations and rendered.
After the distance equation is available, the normal of the volume surface can be obtained, and is expressed by partial differentiation:
n = ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z)
wherein x, y and z are pixel point coordinates in the model, namely three-dimensional data; after the three-dimensional data is available, the three-dimensional data can be mapped to a screen space.
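The distance equation and the partial-differentiation normal above can be illustrated with a sphere; a minimal sketch, assuming a sphere centered at the origin and approximating the partial derivatives with central differences:

```python
import math

def sphere_sdf(x, y, z, radius=1.0):
    """Signed distance to a sphere at the origin: 0 on the surface,
    greater than 0 outside the model, less than 0 inside the model."""
    return math.sqrt(x * x + y * y + z * z) - radius

def sdf_normal(sdf, x, y, z, eps=1e-4):
    """Approximate the surface normal as the normalized gradient of the
    distance equation, using central differences for the partial derivatives."""
    nx = sdf(x + eps, y, z) - sdf(x - eps, y, z)
    ny = sdf(x, y + eps, z) - sdf(x, y - eps, z)
    nz = sdf(x, y, z + eps) - sdf(x, y, z - eps)
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

The same two pieces — a sign test for inside/outside and a gradient for the normal — are all that the later screen-space mapping requires.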
According to this isosurface representation, it must be determined whether a point is observable in screen space, i.e., whether the ray through a screen-space pixel intersects the cube without being blocked by other opaque objects. For simplicity, occlusion culling is not considered here.
First, the screen-space vector needs to be converted into a world-coordinate vector; the ray direction is simply transformed with the InvViewProjection matrix. Of course, if the conversion is done in the pixel shader, the coordinates must first be converted from pixel coordinates to clip-space coordinates and then to world space; alternatively, the conversion can be done in the vertex shader and then interpolated by the hardware.
Optionally, since the pixel may be inside the model, the position of the pixel needs to be determined. Fig. 10 is a scene flowchart of another ray tracing method provided in an embodiment of the present application, showing a ray starting from point P0. The distance to the closest point on the surface is determined first; a sphere of that radius is then expanded around P0 until it touches the surface, so that point P1 is the intersection between the ray and that sphere. Repeating this process generates points P2 to P4; as the figure shows, point P4 is effectively the intersection point, i.e., it lies on the model surface.
If the cube at the current pixel position already contains the camera, i.e., the ray intersects it immediately, the color of that pixel is the color of the cube; otherwise it is necessary to find whether a ray in the pixel's direction can intersect the cube surface. With the circular search method shown in fig. 10, the iteration quickly determines whether an intersection occurs, yielding the color value of the model.
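The iterative search from P0 to P4 described above is commonly known as sphere tracing; a minimal sketch, where `sdf` stands for any distance equation of the kind discussed, and the step limits are illustrative assumptions:

```python
def sphere_trace(sdf, origin, direction, max_steps=64, hit_eps=1e-3, max_dist=100.0):
    """March a ray by repeatedly stepping forward by the distance to the
    closest surface point, as in the P0..P4 iteration of fig. 10."""
    ox, oy, oz = origin
    dx, dy, dz = direction  # assumed normalized
    t = 0.0
    for _ in range(max_steps):
        d = sdf(ox + dx * t, oy + dy * t, oz + dz * t)
        if d < hit_eps:
            return t  # intersection found at ray parameter t
        t += d       # safe step: the sphere of radius d contains no surface
        if t > max_dist:
            break
    return None  # ray misses the model
```

Each step is safe because, by definition of the distance equation, no surface lies closer than the returned distance, so the search converges quickly near an intersection.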
803. The computer device determines a noise map of the shape map in at least two dimensions according to preset rules.
804. The computer device performs image synthesis based on the noise map.
In this embodiment, steps 803 and 804 are similar to steps 302 and 303 shown in fig. 3; reference may be made to the description of the related features, which is not repeated here.
805. The computer device performs image updates based on the detail map.
In this embodiment, in order to further improve the accuracy of the image, a detail map may be prepared. It is to be understood that the detail map may be additionally obtained or may be generated based on the shape map.
The following describes the process of generating a detail map based on the shape map. First, the hierarchical features of the shape map are acquired; then a detail map is obtained according to the hierarchical features, where the detail map is obtained by adjusting the frequency of the noise map; finally, the target image is updated based on the detail map. For example, the detail map may be Worley noise of different frequencies on the R, G and A channels.
806. The computer device updates the image based on the rotation map.
In this embodiment, since some models are highly turbulent, a further noise-addition process is needed, for example to simulate an image of a cloud being blown about.
Specifically, the process of updating the image based on the rotation map may be performed with reference to the following formula:
curl A = ∇ × A = (∂A_z/∂y − ∂A_y/∂z) i + (∂A_x/∂z − ∂A_z/∂x) j + (∂A_y/∂x − ∂A_x/∂y) k
wherein curl A is the rotational field applied to the transformed image; x, y and z are the coordinates of the image before transformation; A_x, A_y and A_z are the components of the vector field A; and i, j and k are the unit vectors of the coordinate axes.
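Curl noise of this kind is commonly derived from a scalar potential so that the resulting field is divergence-free; a two-dimensional sketch (an assumption for illustration — this embodiment deflects three channels in 3D):

```python
def curl_2d(potential, x, y, eps=1e-4):
    """2D curl noise: the divergence-free velocity field obtained from a
    scalar potential P as v = (dP/dy, -dP/dx), via central differences."""
    dpdx = (potential(x + eps, y) - potential(x - eps, y)) / (2 * eps)
    dpdy = (potential(x, y + eps) - potential(x, y - eps)) / (2 * eps)
    return (dpdy, -dpdx)
```

Sampling the volume texture at positions offset by this field produces the swirling, wind-blown deflection described above without introducing sources or sinks in the flow.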
807. The computer device generates a target image.
In view of the above embodiment, by optimizing the shape map source the computer device can perform image generation without preparing redundant data, and since only one generation pass is required, image generation efficiency is greatly improved; furthermore, the accuracy of simulating complex models is improved through detail optimization and curl simulation.
Next, cloud image generation is described as an example in conjunction with the above image generation method.
In one possible scenario, the simulation of the cloud image may also set some basic location parameters and cloud-related element parameters to enrich the detailed content of the cloud image, such as: the cloud motion speed, the cloud radius, the motion center coordinate, the cloud thickness, the cloud height, the slice distribution, the observation visual field range, the raindrop density and other parameters, and the specific parameter types are determined by actual scenes.
First, considering that a cloud is shaped like a cluster of blobs, it can be modeled with the Worley noise described above, i.e., the first dimension map, since Worley noise is characterized by blob-like noise formed around random points. For irregular disturbances, Perlin noise can be used in the second dimension map, since Perlin noise is sufficiently random to form such disturbances. In addition, since clouds are composed of clusters of various sizes, Worley noise of different frequencies is combined, finally generating the basic shape of the cloud.
Fig. 11 is a scene schematic diagram of another image generation method provided in an embodiment of the present application. The channels of the shape map in the figure are: the second dimension map, a Perlin-Worley noise map, on the R channel, and the first dimension maps, Worley noise of different frequencies, on the G, B and A channels. In the synthesis process, Perlin-Worley noise is used as the base, the remap function is used to apply the Worley noise of different frequencies, and the results are added in a certain proportion to obtain the final result.
It can be understood that after the above process the basic cloud shape already exists, but the cloud layer has no distribution; a distribution map, i.e., a detail map, is therefore used to distribute the cloud layer. Fig. 12 is a scene schematic diagram of another image generation method provided in an embodiment of the present application; after the detail maps are superimposed, the cloud appears dispersed and the features C1 and C2 generated by the flow distribution appear.
Furthermore, to add more detail and simulate the appearance of being blown about by the wind, curl noise can be added so that the cloud appears to be rolled up by the airflow. Fig. 13 is a scene schematic diagram of another image generation method provided in an embodiment of the present application. A curl-noise map is first obtained, where the curl noise represents the rotational deflection of each of the three channels. The values of the different channels of the curl noise are used to deflect the sampling positions of the first dimension map, i.e., of the different channels of the volume texture; a remap function is then applied to generate the target image, simulating the appearance of being disturbed by a fluid.
Through the above embodiment, the image generation method provided by the present application can greatly improve game quality through the simulation of a sea of clouds, which strongly affects the overall game environment and allows players to become more immersed in the game.
In order to better implement the above-mentioned solution of the embodiments of the present application, the following also provides a related apparatus for implementing the above-mentioned solution. Referring to fig. 14, fig. 14 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present disclosure, in which an image generating apparatus 1400 includes:
an acquisition unit 1401 configured to acquire a shape map, which is related to a target image;
a determining unit 1402, configured to determine a noise map of the shape map in at least two dimensions according to a preset rule, where the preset rule is determined based on color channel information of the shape map, and the color channel information corresponds to the noise map;
a generating unit 1403, configured to perform image synthesis based on the noise map to generate the target image.
Optionally, in some possible implementations of the present application, the determining unit 1402 is specifically configured to determine at least one first dimension map based on a shape feature of the shape map, where the shape feature is determined based on a pixel distribution of the shape map;
the determining unit 1402 is specifically configured to perform adjustment according to the first dimension map as a template to determine the second dimension map;
the generating unit 1403 is specifically configured to perform image synthesis according to the first dimension map and the second dimension map to generate the target image.
Optionally, in some possible implementations of the present application, the determining unit 1402 is specifically configured to obtain a pixel grid of the shape map, where the pixel grid includes a plurality of pixels;
the determining unit 1402 is specifically configured to determine at least one feature point in the pixel grid;
the determining unit 1402 is specifically configured to distribute a noise value according to the distance between the feature point and the pixel to obtain at least one first dimension map.
Optionally, in some possible implementations of the present application, the determining unit 1402 is further configured to determine an image type indicated by the shape map;
the determining unit 1402 is further configured to invert the noise value according to the image type to update the first dimension map.
Optionally, in some possible implementations of the present application, the determining unit 1402 is specifically configured to distribute noise values according to distances between the feature points and the pixels;
the determining unit 1402 is specifically configured to perform frequency adjustment on the noise value to obtain at least one first dimension map.
Optionally, in some possible implementations of the present application, the determining unit 1402 is specifically configured to invert the first dimension map;
the determining unit 1402 is specifically configured to process the inverted first dimension map according to fractal brownian motion;
the determining unit 1402 is specifically configured to intercept the processed noise value in the first dimension map based on a preset threshold to obtain the second dimension map.
Optionally, in some possible implementation manners of the present application, the generating unit 1403 is specifically configured to adjust the first dimension map to different frequencies based on a preset function;
the generating unit 1403 is specifically configured to superimpose the first dimension maps with different frequencies based on the second dimension map to generate the target image.
Optionally, in some possible implementation manners of the present application, the generating unit 1403 is further configured to obtain a hierarchical feature of the shape map;
the generating unit 1403 is further configured to obtain a detail map according to the level feature, where the detail map is obtained by performing frequency adjustment based on the noise map;
the generating unit 1403 is further configured to update the target image based on the detail map.
Optionally, in some possible implementation manners of the present application, the generating unit 1403 is further configured to perform rotation variation on the noise map to obtain a rotation map;
the generating unit 1403 is further configured to update the target image based on the curl map.
Optionally, in some possible implementations of the present application, the obtaining unit 1401 is specifically configured to obtain a three-dimensional shape model;
the obtaining unit 1401 is specifically configured to perform ray tracing on the three-dimensional shape model in a target interface space to obtain the shape map, where the target interface space is a space where the target image is located.
Optionally, in some possible implementations of the present application, the obtaining unit 1401 is specifically configured to obtain distance information between a light source and the three-dimensional shape model, where the light source is a reference point in the target interface space;
the obtaining unit 1401 is specifically configured to perform calculation of a distance field function based on the distance information to obtain light pixel distribution;
the obtaining unit 1401 is specifically configured to generate the shape map according to the light pixel distribution.
The apparatus obtains a shape map related to the target image, and then determines a noise map of the shape map in at least two dimensions according to a preset rule, where the preset rule is determined based on color channel information of the shape map and the color channel information corresponds to the noise map; image synthesis is then performed based on the noise map to generate the target image. This realizes a process that simulates a complex three-dimensional image from a small number of maps, and because the material for image synthesis is obtained by noise transformation of the shape map, it neither occupies a large amount of resources nor requires repeated map superposition operations, which improves image generation efficiency.
An embodiment of the present application further provides a terminal device, as shown in fig. 15, which is a schematic structural diagram of another terminal device provided in the embodiment of the present application, and for convenience of description, only a portion related to the embodiment of the present application is shown, and specific technical details are not disclosed, please refer to a method portion in the embodiment of the present application. The terminal may be any terminal device including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a point of sale (POS), a vehicle-mounted computer, and the like, taking the terminal as the mobile phone as an example:
fig. 15 is a block diagram illustrating a partial structure of a mobile phone related to a terminal provided in an embodiment of the present application. Referring to fig. 15, the cellular phone includes: radio Frequency (RF) circuitry 1510, memory 1520, input unit 1530, display unit 1540, sensor 1550, audio circuitry 1560, wireless fidelity (WiFi) module 1570, processor 1580, and power supply 1590. Those skilled in the art will appreciate that the handset configuration shown in fig. 15 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following specifically describes each constituent component of the mobile phone with reference to fig. 15:
the RF circuit 1510 may be configured to receive and transmit signals during information transmission and reception or during a call, and in particular, receive downlink information of a base station and then process the received downlink information to the processor 1580; in addition, the data for designing uplink is transmitted to the base station. In general, RF circuit 1510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuit 1510 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to global system for mobile communications (GSM), general Packet Radio Service (GPRS), code Division Multiple Access (CDMA), wideband Code Division Multiple Access (WCDMA), long Term Evolution (LTE), email, short Message Service (SMS), etc.
The memory 1520 may be used to store software programs and modules, and the processor 1580 performs various functional applications and data processing of the cellular phone by operating the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1520 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 1530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, can collect touch operations of a user (e.g., operations performed by the user on or near the touch panel 1531 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection devices according to a preset program. Alternatively, the touch panel 1531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 1580, and can receive and execute commands sent by the processor 1580. In addition, the touch panel 1531 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 1530 may include other input devices 1532 in addition to the touch panel 1531. In particular, other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 1540 can be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 1540 may include a display panel 1541, and optionally, the display panel 1541 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 1531 may cover the display panel 1541, and when the touch panel 1531 detects a touch operation on or near the touch panel 1531, the touch operation is transmitted to the processor 1580 to determine the type of the touch event, and then the processor 1580 provides a corresponding visual output on the display panel 1541 according to the type of the touch event. Although in fig. 15, the touch panel 1531 and the display panel 1541 are two separate components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1531 and the display panel 1541 may be integrated to implement the input and output functions of the mobile phone.
The handset can also include at least one sensor 1550, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 1541 according to the brightness of ambient light and a proximity sensor that turns off the display panel 1541 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing gestures of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometers and taps), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
Audio circuitry 1560, speaker 1561, and microphone 1562 may provide an audio interface between the user and the mobile phone. The audio circuit 1560 may transmit the electrical signal converted from received audio data to the speaker 1561, which converts it into a sound signal for output; on the other hand, the microphone 1562 converts collected sound signals into electrical signals, which are received by the audio circuit 1560 and converted into audio data; the audio data is output to the processor 1580 for processing and then sent through the RF circuit 1510 to, for example, another mobile phone, or output to the memory 1520 for further processing.
WiFi is a short-range wireless transmission technology; through the WiFi module 1570 the mobile phone can help the user receive and send e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 15 shows the WiFi module 1570, it will be understood that it is not an essential component of the handset and may be omitted as needed without changing the essence of the invention.
The processor 1580 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1520 and calling data stored in the memory 1520. Optionally, the processor 1580 may include one or more processing units; optionally, the processor 1580 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application program, and the like, and the modem processor mainly processes wireless communication. It is to be appreciated that the modem processor may not be integrated into the processor 1580.
The mobile phone also includes a power source 1590 (e.g., a battery) for supplying power to various components, and optionally, the power source may be logically connected to the processor 1580 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which will not be described herein.
In this embodiment, the processor 1580 included in the terminal further has the function of performing the steps of the image generation method described above.
An embodiment of the present application further provides a computer-readable storage medium storing image generation instructions which, when run on a computer, cause the computer to perform the steps performed by the image generation apparatus in the methods described in the foregoing embodiments shown in fig. 3 to fig. 13.
An embodiment of the present application also provides a computer program product including image generation instructions which, when run on a computer, cause the computer to perform the steps performed by the image generation apparatus in the methods described in the foregoing embodiments shown in fig. 3 to fig. 13.
The embodiment of the present application further provides an image generation system, and the image generation system may include the image generation apparatus in the embodiment described in fig. 14 or the terminal device described in fig. 15.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only one kind of logical functional division, and other divisions are possible in practice; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes over the prior art, or in whole or in part, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an image generation apparatus, or a network device) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (12)

1. A method of image generation, comprising:
acquiring a shape map, wherein the shape map is related to a target image;
acquiring a pixel grid of the shape map, wherein the pixel grid comprises a plurality of pixels;
determining at least one feature point in the pixel grid;
distributing noise values according to distances between the feature points and the pixels to obtain at least one first dimension map;
inverting the first dimension map;
processing the inverted first dimension map according to fractal Brownian motion;
intercepting a noise value in the processed first dimension map based on a preset threshold value to obtain a second dimension map; the first dimension map and the second dimension map are noise maps of the shape map in at least two dimensions;
and carrying out image synthesis according to the first dimension map and the second dimension map so as to generate the target image.
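Outside the claim language, the pipeline of claim 1 can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: `worley_noise`, `fbm`, the feature-point counts, and the 0.4 threshold are all assumptions chosen for demonstration. Cellular (Worley-style) noise is used here as one common way to "distribute noise values according to distances between feature points and pixels".

```python
import numpy as np

def worley_noise(size, n_points, seed=0):
    """First-dimension map: each pixel's noise value is the distance to the
    nearest feature point (cellular/Worley-style noise), normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    points = rng.uniform(0, size, (n_points, 2))          # random feature points
    ys, xs = np.mgrid[0:size, 0:size]
    grid = np.stack([ys, xs], axis=-1).astype(float)      # (size, size, 2)
    d = np.linalg.norm(grid[:, :, None, :] - points[None, None, :, :], axis=-1)
    nearest = d.min(axis=-1)                              # distance to closest point
    return nearest / nearest.max()

def fbm(noise_fn, size, octaves=4, gain=0.5):
    """Fractal Brownian motion: accumulate the base noise at increasing
    frequency (here: more feature points) and decreasing amplitude."""
    total = np.zeros((size, size))
    amp, n_points = 1.0, 8
    for i in range(octaves):
        total += amp * noise_fn(size, n_points, seed=i)
        amp *= gain
        n_points *= 2
    return total / total.max()

size = 64
first_dim = worley_noise(size, 8)                         # first-dimension map
inverted = 1.0 - first_dim                                # inversion step of claim 1
# fBm over the inverted noise (regenerated per octave in this sketch):
processed = fbm(lambda s, p, seed=0: 1.0 - worley_noise(s, p, seed), size)
# "Intercept" the noise values at a preset threshold (0.4 is arbitrary here):
second_dim = np.clip(processed - 0.4, 0.0, None) / 0.6
```

Thresholding the fBm result is what turns a continuous noise field into distinct blobs with empty space between them, which is why the second-dimension map can serve as the base shape in the synthesis step.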
2. The method of claim 1, further comprising:
determining an image type indicated by the shape map;
and inverting the noise value according to the image type to update the first dimension map.
3. The method of claim 1, wherein distributing noise values according to the distances between the feature points and the pixels to obtain at least one first dimension map comprises:
distributing noise values according to distances between the feature points and the pixels;
and carrying out frequency adjustment on the noise value to obtain at least one first dimension map.
4. The method of claim 1, wherein the image synthesizing from the first dimension map and the second dimension map to generate the target image comprises:
adjusting the first dimension map to different frequencies based on a preset function;
and overlapping the first dimension maps with different frequencies by taking the second dimension map as a base to generate the target image.
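Claim 4's synthesis step, using the second-dimension map as a base and overlapping first-dimension maps at different frequencies, might be sketched like this. It is illustrative only: `overlay_octaves`, the subsample-and-tile frequency trick, and the gain of 0.5 are assumptions, not the patent's actual method.

```python
import numpy as np

def overlay_octaves(base, detail, octaves=3, gain=0.5):
    """Claim-4-style synthesis: take the second-dimension map as the base layer
    and blend progressively higher-frequency copies of the first-dimension map
    on top with decreasing weights."""
    out = base.copy()
    amp = gain
    for i in range(1, octaves + 1):
        step = 2 ** i
        # Crude frequency increase: subsample the detail map and tile it back
        # to full size, repeating the pattern `step` times per axis.
        hi = np.tile(detail[::step, ::step], (step, step))[:base.shape[0], :base.shape[1]]
        out += amp * hi
        amp *= gain
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
base = rng.uniform(0.0, 0.5, (64, 64))      # stand-in second-dimension map
detail = rng.uniform(0.0, 1.0, (64, 64))    # stand-in first-dimension map
target = overlay_octaves(base, detail)
```

The decreasing weights keep the low-frequency base dominant, so the high-frequency layers only add surface detail rather than changing the overall shape.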
5. The method of claim 1, further comprising:
acquiring the hierarchical characteristics of the shape map;
acquiring a detail map according to the level features, wherein the detail map is obtained by adjusting the frequency based on the noise map;
updating the target image based on the detail map.
6. The method of claim 1, further comprising:
performing a rotation transformation on the noise map to obtain a rotation map;
and updating the target image based on the rotation map.
7. The method of claim 1, wherein the obtaining the shape map comprises:
acquiring a three-dimensional shape model;
and performing ray tracing on the three-dimensional shape model in a target interface space to obtain the shape map, wherein the target interface space is a space in which the target image is located.
8. The method of claim 7, wherein the ray tracing the three-dimensional shape model in a target interface space to obtain the shape map comprises:
obtaining distance information from a light source to the three-dimensional shape model, wherein the light source is a reference point in the target interface space;
performing a calculation of a distance field function based on the distance information to obtain a light pixel distribution;
and generating the shape map according to the light pixel distribution.
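Claim 8's ray tracing over a distance field corresponds to the sphere-tracing technique: march each ray forward by the distance-field value until it hits the model surface or escapes. Below is a minimal sketch assuming a sphere as the three-dimensional shape model and a 9x9 image plane; the function names, the reference-point position, and all constants are illustrative assumptions.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from point p to a sphere - the 'distance field function'
    of claim 8 (a sphere stands in for the three-dimensional shape model)."""
    return math.dist(p, center) - radius

def ray_march(origin, direction, sdf, max_steps=64, eps=1e-3, max_dist=20.0):
    """Sphere tracing: step along the ray by the distance-field value until the
    surface is hit (return the travelled distance) or the ray escapes (None)."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t
        t += d
        if t > max_dist:
            return None
    return None

# Shape map: binary light pixel distribution over a small image plane.
size = 9
shape_map = [[0] * size for _ in range(size)]
for y in range(size):
    for x in range(size):
        # Map each pixel to a direction through a unit image plane at z = 1.
        u = (x - size // 2) / size
        v = (y - size // 2) / size
        norm = math.sqrt(u * u + v * v + 1.0)
        hit = ray_march((0.0, 0.0, 0.0),
                        (u / norm, v / norm, 1.0 / norm), sphere_sdf)
        shape_map[y][x] = 1 if hit is not None else 0
```

Because each step advances by exactly the distance-field value, the ray can never overshoot the surface, which is what makes this marching scheme safe for arbitrary shape models described by a distance function.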
9. The method of claim 1, wherein the shape map comprises a shape of a cloud, and wherein the target image is indicative of the shape of the cloud in three dimensions.
10. An apparatus for image generation, comprising:
an acquisition unit configured to acquire a shape map, the shape map being related to a target image;
a determining unit, configured to obtain a pixel grid of the shape map, where the pixel grid includes a plurality of pixels; determine at least one feature point in the pixel grid; distribute noise values according to distances between the feature points and the pixels to obtain at least one first dimension map; invert the first dimension map; process the inverted first dimension map according to fractal Brownian motion; and intercept a noise value in the processed first dimension map based on a preset threshold value to obtain a second dimension map; the first dimension map and the second dimension map are noise maps of the shape map in at least two dimensions;
a generating unit, configured to perform image synthesis according to the first dimension map and the second dimension map to generate the target image.
11. A computer device, the computer device comprising a processor and a memory:
the memory is used for storing program codes; the processor is configured to perform the method of image generation of any of claims 1 to 9 according to instructions in the program code.
12. A computer readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of image generation of any of the preceding claims 1 to 9.
CN202010207706.0A 2020-03-23 2020-03-23 Image generation method and related device Active CN111445563B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010207706.0A CN111445563B (en) 2020-03-23 2020-03-23 Image generation method and related device

Publications (2)

Publication Number Publication Date
CN111445563A CN111445563A (en) 2020-07-24
CN111445563B true CN111445563B (en) 2023-03-10

Family

ID=71629413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010207706.0A Active CN111445563B (en) 2020-03-23 2020-03-23 Image generation method and related device

Country Status (1)

Country Link
CN (1) CN111445563B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419465A (en) * 2020-12-09 2021-02-26 网易(杭州)网络有限公司 Rendering method and device of virtual model
CN113240578A (en) * 2021-05-13 2021-08-10 北京达佳互联信息技术有限公司 Image special effect generation method and device, electronic equipment and storage medium
CN114339448B (en) * 2021-12-31 2024-02-13 深圳万兴软件有限公司 Method and device for manufacturing special effects of beam video, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107682731A (en) * 2017-10-24 2018-02-09 北京奇虎科技有限公司 Video data distortion processing method, device, computing device and storage medium
CN108295467A (en) * 2018-02-06 2018-07-20 网易(杭州)网络有限公司 Rendering method, device and the storage medium of image, processor and terminal
CN109427083A (en) * 2017-08-17 2019-03-05 腾讯科技(深圳)有限公司 Display methods, device, terminal and the storage medium of three-dimensional avatars
CN109949386A (en) * 2019-03-07 2019-06-28 北京旷视科技有限公司 A kind of Method for Texture Image Synthesis and device
CN110648274A (en) * 2019-09-23 2020-01-03 阿里巴巴集团控股有限公司 Fisheye image generation method and device

Also Published As

Publication number Publication date
CN111445563A (en) 2020-07-24

Similar Documents

Publication Publication Date Title
US11498003B2 (en) Image rendering method, device, and storage medium
CN111292405B (en) Image rendering method and related device
CN111445563B (en) Image generation method and related device
CN112037311B (en) Animation generation method, animation playing method and related devices
CN106547599B (en) Method and terminal for dynamically loading resources
WO2016173427A1 (en) Method, device and computer readable medium for creating motion blur effect
CN109725956B (en) Scene rendering method and related device
CN112245926B (en) Virtual terrain rendering method, device, equipment and medium
CN109753892B (en) Face wrinkle generation method and device, computer storage medium and terminal
WO2023231537A1 (en) Topographic image rendering method and apparatus, device, computer readable storage medium and computer program product
CN114565708A (en) Method, device and equipment for selecting anti-aliasing algorithm and readable storage medium
CN110517346B (en) Virtual environment interface display method and device, computer equipment and storage medium
CN111445568B (en) Character expression editing method, device, computer storage medium and terminal
CN112206517A (en) Rendering method, device, storage medium and computer equipment
CN112206519B (en) Method, device, storage medium and computer equipment for realizing game scene environment change
CN116672706B (en) Illumination rendering method, device, terminal and storage medium
CN112950753B (en) Virtual plant display method, device, equipment and storage medium
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
CN116704107B (en) Image rendering method and related device
CN117618893A (en) Scene special effect processing method and device, electronic equipment and storage medium
CN115588066A (en) Rendering method and device of virtual object, computer equipment and storage medium
CN115359162A (en) Method and device for realizing oblique photography model transparent mask based on Cesium and electronic equipment
CN117893668A (en) Virtual scene processing method and device, computer equipment and storage medium
CN117839216A (en) Model conversion method and device, electronic equipment and storage medium
CN117582661A (en) Virtual model rendering method, device, medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40026167

Country of ref document: HK

GR01 Patent grant