CN111445563A - Image generation method and related device
- Publication number
- CN111445563A CN111445563A CN202010207706.0A CN202010207706A CN111445563A CN 111445563 A CN111445563 A CN 111445563A CN 202010207706 A CN202010207706 A CN 202010207706A CN 111445563 A CN111445563 A CN 111445563A
- Authority
- CN
- China
- Prior art keywords: map, shape, noise, dimension, image
- Legal status: Granted
Classifications
All classifications fall under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
- G06T7/90—Determination of colour characteristics (under G06T7/00—Image analysis)
- G06T15/04—Texture mapping (under G06T15/00—3D [Three Dimensional] image rendering)
- G06T3/04—Context-preserving transformations, e.g. by using an importance map (under G06T3/00—Geometric image transformations in the plane of the image)
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction (under G06T5/00—Image enhancement or restoration)
- G06T5/70—Denoising; Smoothing (under G06T5/00—Image enhancement or restoration)
- G06T2207/20221—Image fusion; Image merging (under G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/20—Special algorithmic details; G06T2207/20212—Image combination)
Abstract
The application discloses an image generation method and a related device. A shape map related to a target image is obtained, and noise maps of the shape map in at least two dimensions are then determined according to a preset rule, where the preset rule is determined based on color channel information of the shape map and the color channel information corresponds to the noise maps; image synthesis is then performed based on the noise maps to generate the target image. This realizes an image generation process that simulates a complex three-dimensional image from a small number of maps; because the image synthesis material is obtained by noise transformation of the shape map, no large amount of resources is occupied and no repeated map superposition operations are required, which improves image generation efficiency.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image generation method and a related apparatus.
Background
With the development of mobile terminal technologies, the requirements placed on images keep rising. In games, displaying a complex three-dimensional model requires simulating it and converting it into images; for example, the simulation of clouds, that is, modeling a cloud with an appropriate noise model based on how clouds form and what shapes they take in the physical world.
In general, a complex three-dimensional model can be displayed by stacking a large number of model maps; for example, a cloud is simulated with 5 to 400 semi-transparent model map materials per cloud. When the model map materials are rendered, each always faces the camera, and the three-dimensional cloud image is then generated by superposing them back to front.
However, superposing such a large number of semi-transparent maps requires many different kinds of maps and occupies a large amount of memory, and the step-by-step superposition involves a large amount of redundant and complicated computation, so the image generation process is inefficient.
Disclosure of Invention
In view of this, the present application provides an image generation method that avoids the low image generation efficiency caused by processing a large number of maps and improves the efficiency of the image generation process.
A first aspect of the present application provides an image generating method, which may be applied to a system or a program including an image generating function in a terminal device, and specifically includes: acquiring a shape map, wherein the shape map is related to a target image;
determining noise maps of the shape maps under at least two dimensions according to a preset rule, wherein the preset rule is determined based on color channel information of the shape maps, and the color channel information corresponds to the noise maps;
and performing image synthesis based on the noise map to generate the target image.
Optionally, in some possible implementation manners of the present application, the determining the noise maps of the shape map in at least two dimensions according to a preset rule includes:
determining at least one first dimension map based on shape features of the shape map, the shape features being determined based on a pixel distribution of the shape map;
adjusting, with the first dimension map as a template, to determine a second dimension map;
the image synthesizing based on the noise map to generate the target image comprises:
and carrying out image synthesis according to the first dimension map and the second dimension map so as to generate the target image.
Optionally, in some possible implementations of the present application, the determining at least one first dimension map based on shape features of the shape map includes:
acquiring a pixel grid of the shape map, wherein the pixel grid comprises a plurality of pixels;
determining at least one feature point in the pixel grid;
and distributing noise values according to the distances between the feature points and the pixels to obtain the at least one first dimension map.
Optionally, in some possible implementations of the present application, the method further includes:
determining an image type indicated by the shape map;
and inverting the noise value according to the image type to update the first dimension map.
Optionally, in some possible implementations of the present application, the distributing noise values according to the distances between the feature points and the pixels to obtain at least one first dimension map includes:
distributing noise values according to the distances between the feature points and the pixels;
and carrying out frequency adjustment on the noise value to obtain at least one first dimension map.
Optionally, in some possible implementation manners of the present application, the adjusting according to the first dimension map as a template to determine the second dimension map includes:
inverting the first dimension map;
processing the inverted first dimension map according to fractal Brownian motion;
and intercepting the noise value in the processed first dimension map based on a preset threshold value to obtain the second dimension map.
Optionally, in some possible implementations of the present application, the performing image synthesis according to the first dimension map and the second dimension map to generate the target image includes:
adjusting the first dimension map to different frequencies based on a preset function;
and overlapping the first dimension maps with different frequencies by taking the second dimension map as a base to generate the target image.
Optionally, in some possible implementations of the present application, the method further includes:
acquiring hierarchical features of the shape map;
acquiring a detail map according to the hierarchical features, wherein the detail map is obtained by frequency adjustment based on the noise map;
updating the target image based on the detail map.
Optionally, in some possible implementations of the present application, the method further includes:
performing rotation variation on the noise map to obtain a rotation map;
and updating the target image based on the rotation map.
Optionally, in some possible implementations of the present application, the obtaining the shape map includes:
acquiring a three-dimensional shape model;
and performing ray tracing on the three-dimensional shape model in a target interface space to obtain the shape map, wherein the target interface space is a space where the target image is located.
Optionally, in some possible implementations of the present application, the performing ray tracing on the three-dimensional shape model in a target interface space to obtain the shape map includes:
obtaining distance information from a light source to the three-dimensional shape model, wherein the light source is a reference point in the target interface space;
performing a calculation of a distance field function based on the distance information to obtain a light pixel distribution;
and generating the shape map according to the light pixel distribution.
Optionally, in some possible implementations of the present application, the shape map includes a shape of a cloud, and the target image is used to indicate the shape of the cloud in three dimensions.
A second aspect of the present application provides an apparatus for image generation, comprising: an acquisition unit configured to acquire a shape map, the shape map being related to a target image;
a determining unit, configured to determine a noise map of the shape map in at least two dimensions according to a preset rule, where the preset rule is determined based on color channel information of the shape map, and the color channel information corresponds to the noise map;
a generating unit configured to perform image synthesis based on the noise map to generate the target image.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to determine at least one first dimension map based on a shape feature of the shape map, where the shape feature is determined based on a pixel distribution of the shape map;
the determining unit is specifically configured to perform adjustment according to the first dimension map as a template to determine the second dimension map;
the generating unit is specifically configured to perform image synthesis according to the first dimension map and the second dimension map to generate the target image.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to obtain a pixel grid of the shape map, where the pixel grid includes a plurality of pixels;
the determining unit is specifically configured to determine at least one feature point in the pixel grid;
the determining unit is specifically configured to distribute noise values according to the distances between the feature points and the pixels to obtain at least one first dimension map.
Optionally, in some possible implementations of the present application, the determining unit is further configured to determine an image type indicated by the shape map;
the determining unit is further configured to invert the noise value according to the image type to update the first dimension map.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to distribute noise values according to distances between the feature points and the pixels;
the determining unit is specifically configured to perform frequency adjustment on the noise value to obtain at least one first dimension map.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to invert the first dimension map;
the determining unit is specifically configured to process the inverted first dimension map according to fractal brownian motion;
the determining unit is specifically configured to intercept the noise values in the processed first dimension map based on a preset threshold to obtain the second dimension map.
Optionally, in some possible implementation manners of the present application, the generating unit is specifically configured to adjust the first dimension map to different frequencies based on a preset function;
the generating unit is specifically configured to superimpose the first dimension maps with different frequencies based on the second dimension map to generate the target image.
Optionally, in some possible implementation manners of the present application, the generating unit is further configured to obtain a hierarchical feature of the shape map;
the generating unit is further configured to acquire a detail map according to the hierarchical features, wherein the detail map is obtained by frequency adjustment based on the noise map;
the generating unit is further configured to update the target image based on the detail map.
Optionally, in some possible implementation manners of the present application, the generating unit is further configured to perform rotation variation on the noise map to obtain a rotation map;
the generating unit is further configured to update the target image based on the rotation map.
Optionally, in some possible implementation manners of the present application, the obtaining unit is specifically configured to obtain a three-dimensional shape model;
the obtaining unit is specifically configured to perform ray tracing on the three-dimensional shape model in a target interface space to obtain the shape map, where the target interface space is a space where the target image is located.
Optionally, in some possible implementation manners of the present application, the obtaining unit is specifically configured to obtain distance information between a light source and the three-dimensional shape model, where the light source is a reference point in the target interface space;
the obtaining unit is specifically configured to perform calculation of a distance field function based on the distance information to obtain light pixel distribution;
the obtaining unit is specifically configured to generate the shape map according to the light pixel distribution.
A third aspect of the present application provides a computer device comprising: a memory, a processor, and a bus system; the memory is used for storing program codes; the processor is configured to perform the method of image generation according to any of the above first aspect or the first aspect according to instructions in the program code.
A fourth aspect of the present application provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of image generation of the first aspect or any of the first aspects described above.
According to the technical scheme, the embodiment of the application has the following advantages:
the method comprises the steps of obtaining a shape map related to a target image, and then determining a noise map of the shape map under at least two dimensions according to a preset rule, wherein the preset rule is determined based on color channel information of the shape map, and the color channel information corresponds to the noise map; and then image synthesis is performed based on the noise map to generate a target image. The method realizes the image generation process of simulating the complex three-dimensional image based on a small number of maps, and the image synthesis material is obtained by the noise transformation of the shape maps, so that a large amount of resources are not required to be occupied, repeated map superposition operation is not required, and the image generation efficiency is improved.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a diagram of the network architecture in which the image generation system operates;
FIG. 2 is a flowchart framework of image generation according to an embodiment of the present application;
FIG. 3 is a flowchart of an image generation method according to an embodiment of the present application;
FIG. 4 is a scene schematic diagram of an image generation method according to an embodiment of the present application;
FIG. 5 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 6 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 7 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 8 is a flowchart of another image generation method according to an embodiment of the present application;
FIG. 9 is a scene flowchart of a ray tracing method according to an embodiment of the present application;
FIG. 10 is a flowchart of another ray tracing method according to an embodiment of the present application;
FIG. 11 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 12 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 13 is a scene schematic diagram of another image generation method according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an image generation apparatus according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The embodiments of the application provide an image generation method and a related device, which can be applied to a system or a program with an image generation function in a terminal device. A shape map related to a target image is obtained, and noise maps of the shape map in at least two dimensions are then determined according to a preset rule, where the preset rule is determined based on color channel information of the shape map and the color channel information corresponds to the noise maps; image synthesis is then performed based on the noise maps to generate the target image. This realizes an image generation process that simulates a complex three-dimensional image from a small number of maps; because the image synthesis material is obtained by noise transformation of the shape map, no large amount of resources is occupied and no repeated map superposition operations are required, which improves image generation efficiency.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some nouns that may appear in the embodiments of the present application are explained.
Noise mapping: a map generated by performing a targeted process on a certain color channel of an image, for example: and calculating Worley noise of the R channel of the original mapping chart.
Color channel: i.e., RGBA channels, are channels representing Red (Red), Green (Green), Blue (Blue), and transparency (Alpha), respectively.
Worley noise: a noise conversion method based on point transformation is also called cell noise.
Perlin noise: defining a plurality of vertexes, wherein each vertex contains a random gradient vector, the vertexes can generate potential energy influence on surrounding coordinates according to the gradient vector of the vertexes, the potential energy is higher along the ascending of the gradient direction of the vertexes, and further when an output value of a certain coordinate is required to be obtained, the potential energy caused by each vertex nearby the coordinate needs to be superposed, so that a noise generation method for outputting the total potential energy is obtained, and the noise generation method is mainly used for irregular disturbance.
Perlin-Worley noise: the application provides a method for carrying out Worley noise processing based on Perlin noise.
Fractal brownian motion: the method is used for improving the fluidity of the image by superposing noise in a loop and reducing the amplitude of the noise by a certain ratio while continuously increasing the frequency by a certain multiple.
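To make the fractal Brownian motion concrete, the following GLSL sketch (the same shader language as the remap function later in this description) superposes five octaves. It is a minimal sketch, not an implementation prescribed by this application: noise2() is an assumed scalar 2D noise function such as Perlin noise, and the octave count and the factor-of-two frequency step are illustrative choices.
float noise2(vec2 p);      // assumed 2D noise function (e.g. Perlin noise)
float fbm(vec2 p)          // fractal Brownian motion: a loop of noise octaves
{
    float sum = 0.0;       // accumulated noise value
    float amplitude = 0.5; // reduced by a fixed ratio each octave
    float frequency = 1.0; // increased by a fixed multiple each octave
    for (int i = 0; i < 5; i++)
    {
        sum += amplitude * noise2(p * frequency);
        frequency *= 2.0;
        amplitude *= 0.5;
    }
    return sum;
}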
It should be understood that the image generation method provided by the present application may be applied to a system or a program with an image generation function in a terminal device, for example an image rendering platform. Specifically, the image generation system may operate in the network architecture shown in fig. 1, which is a diagram of the network architecture in which the image generation system runs. As can be seen from the figure, the image generation system may provide image generation for a plurality of information sources: the terminal establishes a connection with the server through the network, receives a plurality of map contents sent by the server, and then performs simulated rendering of the model in the scene according to the image generation method of the present application, so as to display the model. It is understood that fig. 1 shows various terminal devices; in an actual scene there may be more or fewer types of terminal devices participating in the image generation process, the specific number and types depending on the actual scene, which is not limited herein. In addition, fig. 1 shows one server, but in an actual scene multiple servers may participate, especially in a scene of multi-content application interaction; the specific number of servers depends on the actual scene.
It should be noted that the image generation method provided in this embodiment may also be performed offline, that is, without the participation of a server; in that case the terminal connects with other terminals locally, and the image generation process is then performed between the terminals.
It is understood that the image generation system described above may run on a personal mobile terminal, for example as an application such as a game platform; it may also run on a server, or on a third-party device that provides image generation so as to obtain the image generation processing result of an information source. The specific image generation system may run in the above-mentioned devices in the form of a program, run as a system component in those devices, or serve as one kind of cloud service program; the specific operation mode depends on the actual scene and is not limited herein.
With the development of mobile terminal technologies, the requirements placed on images keep rising. In games, displaying a complex three-dimensional model requires simulating it and converting it into images; for example, the simulation of clouds, that is, modeling a cloud with an appropriate noise model based on how clouds form and what shapes they take in the physical world.
In general, a complex three-dimensional model can be displayed by stacking a large number of model maps; for example, a cloud is simulated with 5 to 400 semi-transparent model map materials per cloud. When the model map materials are rendered, each always faces the camera, and the three-dimensional cloud image is then generated by superposing them back to front.
However, superposing such a large number of semi-transparent maps requires many different kinds of maps and occupies a large amount of memory, and the step-by-step superposition involves a large amount of redundant and complicated computation, so the image generation process is inefficient.
In order to solve the above problem, the present application proposes an image generation method, which is applied to the flow framework of image generation shown in fig. 2. As shown in fig. 2, the flow framework provided in the embodiment of the present application first obtains the relevant basic shape maps from a server, then performs different noise processing based on the shape maps to obtain a plurality of noise maps, and then superimposes the noise maps to generate the target image.
It is understood that the method provided by the present application may be a program written as processing logic in a hardware system, or an image generation apparatus implementing that processing logic in an integrated or external manner. As one implementation, the image generation apparatus obtains a shape map associated with a target image, and then determines noise maps of the shape map in at least two dimensions according to a preset rule, where the preset rule is determined based on color channel information of the shape map and the color channel information corresponds to the noise maps; image synthesis is then performed based on the noise maps to generate the target image. This realizes an image generation process that simulates a complex three-dimensional image from a small number of maps; because the image synthesis material is obtained by noise transformation of the shape map, no large amount of resources is occupied and no repeated map superposition operations are required, which improves image generation efficiency.
With reference to the above flow architecture, the image generation method in the present application is described below. Referring to fig. 3, fig. 3 is a flowchart of an image generation method provided in an embodiment of the present application; the embodiment includes at least the following steps:
301. A computer device obtains a shape map.
In this embodiment, the shape map is associated with the target image. For example, if the target image is a three-dimensional cloud image, the shape map is a map containing the basic shape of the cloud, displayed based on the coordinates of the current display interface.
It will be appreciated that the shape map may be sent by a server, stored by the computer device itself, or transformed from a three-dimensional model, for example by performing ray tracing on the three-dimensional model to obtain its image display under the current interface.
302. The computer device determines a noise map of the shape map in at least two dimensions according to preset rules.
In this embodiment, the preset rule is determined based on the color channel information of the shape map, and the color channel information corresponds to the noise maps. The color channels include the R, G, B and A channels, and a noise map may be obtained by performing noise processing on the map in a specified channel, for example applying Worley noise processing to the R channel. In an actual scene, different color channels are processed according to the requirements of the model, for example: the first dimension maps are computed for the G, B and A channels and the second dimension map for the R channel; the specific color channel selection is determined by the actual scene and is not limited herein.
Next, the preset rule for dividing the different noise maps is described. Since three-dimensional images generally include background and foreground portions, in some model rendering the background and foreground blend into each other; for example, when drawing a cloud, the foreground is the protruding cloud body while the background is a blurred element carrying the cloud shape. Therefore, in this embodiment, a complex model in which background and foreground interact can be generated by simulation, that is, noise in at least two dimensions is generated; this corresponds, for example, to models simulating clouds, smoke, and the like.
Specifically, at least one first dimension map is first determined based on shape features of the shape map. The first dimension map, which is used to indicate texture features of the image, may be determined based on the pixel distribution of the shape map, since texture features generally emerge from the pixel distribution; for example, in the image simulation of smoke, the pixel value at the center of the smoke is higher than the surrounding pixel values. Further, to ensure that the texture features are displayed completely, a plurality of first dimension maps can be determined based on the shape features of the shape map, thereby improving the accuracy of the image. In one possible scenario, the first dimension map may be the G channel processed with Worley noise.
After the first dimension map is determined, an adjustment is made with the first dimension map as a template to determine the second dimension map, that is, the texture of the model is loaded onto the corresponding background; image synthesis is then performed according to the first dimension map and the second dimension map to generate the target image.
Optionally, the shape features may be determined based on the pixel distribution. Specifically, fig. 4 is a scene schematic diagram of an image generation method provided in the embodiment of the present application, indicating a pixel grid A1 and a feature point A2. First, the pixel grid A1 of the shape map is obtained, the grid comprising a plurality of pixels; then at least one feature point A2 is determined in the pixel grid; noise values are then distributed according to the distances between the feature point and the pixels to obtain a first dimension map. That is, centered on feature point A2, the noise changes gradually outward, producing the convex effect shown in fig. 4.
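As an illustration of this step, the GLSL sketch below is one assumed realization of such Worley (cellular) noise: each grid cell receives one pseudo-random feature point, and the noise value of a pixel is distributed according to its distance to the nearest feature point. The hash function and the 3x3 search radius are illustrative choices, not ones prescribed by this application.
vec2 hash22(vec2 cell) // assumed hash: maps a cell id to a feature point in [0,1)^2
{
    vec2 v = vec2(dot(cell, vec2(127.1, 311.7)),
                  dot(cell, vec2(269.5, 183.3)));
    return fract(sin(v) * 43758.5453);
}

float worley(vec2 p)
{
    vec2 cell = floor(p);
    float minDist = 8.0; // large start value; result is roughly in [0,1]
    for (int y = -1; y <= 1; y++)          // search the 3x3 neighbourhood
    {
        for (int x = -1; x <= 1; x++)
        {
            vec2 neighbour = cell + vec2(float(x), float(y));
            vec2 feature = neighbour + hash22(neighbour); // feature point A2
            minDist = min(minDist, distance(p, feature)); // distance to pixel
        }
    }
    return minDist; // noise value distributed by distance to the feature point
}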
It will be appreciated that for some models the features should appear where the noise is lower; for example, in a cloud model the cloud is whiter the closer it is to its raised center, so the noise values can be inverted. Specifically, the image type indicated by the shape map is determined first; the noise values are then inverted according to the image type to update the first dimension map, for example when generating a cloud model. Fig. 5 is a scene schematic diagram of another image generation method provided in the embodiment of the present application; it shows the map of fig. 4 with its noise values inverted, so that the noise value at feature point B1 is low and appears white, which simulates the features of a cloud better.
Optionally, since the first dimension maps reflect the texture features of the image, processing the shape map with different methods would yield different texture features; but using several different processing methods would reduce the efficiency of image generation, so frequency conversion based on the same method can be used instead to simulate texture features of different granularities. Specifically, noise values are first distributed according to the distances between the feature points and the pixels, for example gradually increasing, gradually decreasing, or following some specific frequency; the noise values are then frequency-adjusted to obtain at least one first dimension map. As shown in fig. 6, a scene schematic diagram of another image generation method provided in the embodiment of the present application, the images in the figure are shape maps processed with Worley noise whose frequency increases from left to right, which simulates the details of the model well.
Optionally, since the second dimension map reflects the background of the model, and the background may be associated with the foreground, the second dimension map can be generated based on the first dimension map using the process shown in fig. 7, a scene schematic diagram of another image generation method provided in the embodiment of the present application. The first dimension map (Worley noise) is first inverted; then, to improve the fluidity of the background, the inverted first dimension map is processed according to fractal Brownian motion; finally, the noise values in the processed first dimension map are intercepted based on a preset threshold to obtain the second dimension map, i.e. Perlin-Worley noise, which yields a background map that fuses well with the foreground.
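A hedged GLSL sketch of this construction follows, reusing the worley() function from the sketch above; the number of fractal-Brownian-motion octaves and the 0.2 interception threshold are illustrative assumptions rather than values fixed by this application. In a full implementation the inverted Worley octaves would typically also be combined with a Perlin noise base, which is where the name Perlin-Worley comes from.
float perlinWorley(vec2 p) // second dimension map per the steps above
{
    float sum = 0.0;
    float amplitude = 0.5;
    float frequency = 1.0;
    for (int i = 0; i < 4; i++) // fractal Brownian motion of the inverted map
    {
        sum += amplitude * (1.0 - worley(p * frequency)); // inverted Worley
        frequency *= 2.0;
        amplitude *= 0.5;
    }
    // intercept the noise values against a preset threshold
    return clamp((sum - 0.2) / (1.0 - 0.2), 0.0, 1.0);
}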
In a possible scenario, the target image may include images of a plurality of different types of models, and at this time, the images may be generated by using the processing manners of different dimensions in the above embodiments, where the specific number is determined by the actual scenario.
303. The computer device performs image synthesis based on the noise map to generate a target image.
In this embodiment, the process of synthesizing the noise maps may be performed based on a preset function, where the preset function may be a remapping function (remap): the first dimension maps are adjusted to different frequencies, and the first dimension maps of different frequencies are then superimposed with the second dimension map as a base to generate the target image. Specifically, the remap function can be implemented as follows:
float remap(in float value, in float original_min, in float original_max, in float new_min, in float new_max) // original value range
{
    return new_min + ((value - original_min) / (original_max - original_min)) * (new_max - new_min); // map into the target value range
}
The target image is generated by processing the corresponding channels in different dimensions, for example Perlin-Worley noise for the R channel and Worley noise of different frequencies for the G, B and A channels, after which the noise values are fused through remap.
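As one hedged illustration of this fusion, the sketch below combines a texel whose R channel holds Perlin-Worley noise and whose G, B and A channels hold Worley noise at rising frequencies, using the remap function above; the octave weights are illustrative assumptions modeled on common volumetric cloud implementations, not values given in this application.
float cloudBaseShape(vec4 noiseTexel) // R: Perlin-Worley, GBA: Worley octaves
{
    float worleyFbm = noiseTexel.g * 0.625   // weights are illustrative
                    + noiseTexel.b * 0.25
                    + noiseTexel.a * 0.125;
    // carve the Perlin-Worley base with the Worley octaves via remap
    return remap(noiseTexel.r, worleyFbm - 1.0, 1.0, 0.0, 1.0);
}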
From the above embodiment it can be seen that a shape map related to the target image is obtained, and noise maps of the shape map in at least two dimensions are then determined according to a preset rule, where the preset rule is determined based on color channel information of the shape map and the color channel information corresponds to the noise maps; image synthesis is then performed based on the noise maps to generate the target image. This realizes an image generation process that simulates a complex three-dimensional image from a small number of maps; because the image synthesis material is obtained by noise transformation of the shape map, no large amount of resources is occupied and no repeated map superposition operations are required, which improves image generation efficiency.
The above embodiments describe the image generation process; however, since the details of a model differ between scenes, further noise transformations can be applied to make the image more realistic. This process is described below. Referring to fig. 8, fig. 8 is a flowchart of another image generation method according to an embodiment of the present application; the embodiment includes at least the following steps:
801. A computer device obtains a three-dimensional model.
In this embodiment, the three-dimensional model is general model data. Such model data generally occupies a large space and needs to be mapped into the corresponding current interface scene for display. In particular, during display on a mobile terminal the processing capacity of the device is limited, and the three-dimensional model needs to be converted, that is, restored under screen coordinates.
802. And the computer equipment carries out ray tracing to obtain the shape map.
In this embodiment, the ray tracing process mainly involves the processing of surface data (isosurface) and volume data. For surface data, the reflection of rays is simulated and the image is rendered. As shown in fig. 9, a scene flow diagram of the ray tracing method provided in the embodiment of the present application, the process includes the following steps:
(1) Ray casting. For each pixel of the final image, a sight ray is cast through the model. At this stage it is useful to consider the model enclosed in a bounding primitive, a simple geometric object used to intersect the sight ray with the model.
(2) Sampling. Equidistant sampling points, or samples, are selected along the portion of the sight ray inside the model. In general the model is not aligned with the sight ray and the sample points usually lie between voxels, so it is necessary to interpolate the sample values from the surrounding voxels.
(3) Shading. For each sample point, the transfer function retrieves an RGBA material color, and the gradient of the illumination values is calculated. The gradient represents the orientation of the local surface within the model. The sample is then shaded, i.e. colored and illuminated, according to its surface orientation and the position of the light source in the scene.
(4) Compositing. After all the sample points have been shaded, they are composited along the sight ray to generate the final color value of the pixel currently being processed. This can be done back to front, that is, the calculation starts with the sample farthest from the viewer and ends with the sample closest to the viewer, which ensures that the masked portions of the model do not affect the generated pixels; a front-to-back order can improve computational efficiency, because the remaining light energy decreases as the light travels away from the camera. A minimal sketch of this compositing step is given after this list.
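The following GLSL sketch shows the front-to-back variant of step (4), under the assumption that the shaded RGBA samples along the sight ray are already available in an array; the array size and the 0.99 early-exit opacity are illustrative assumptions.
vec4 compositeFrontToBack(vec4 samples[64], int count)
{
    vec4 result = vec4(0.0); // accumulated color and opacity
    for (int i = 0; i < 64; i++)
    {
        if (i >= count || result.a > 0.99) // early ray termination
        {
            break;
        }
        vec4 s = samples[i]; // pre-shaded RGBA sample from step (3)
        result.rgb += (1.0 - result.a) * s.a * s.rgb;
        result.a   += (1.0 - result.a) * s.a;
    }
    return result;
}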
Specifically, the intersection with the model can be computed with reference to the following equation:
f(x, y, z) = f(P) = 0
This equation gives the distance a sign: a value equal to 0 means the point is on the surface of the model, a value greater than 0 means it is outside the model, and a value less than 0 means it is inside the model.
In one possible scenario, where the model is a sphere of radius r, the corresponding equation is expressed as:
f(x, y, z) = √(x² + y² + z²) - r
where x, y and z are the coordinates of a pixel point in the spherical model; this is the distance equation of a ball. Correspondingly, many basic solid structures can be expressed by such equations and thus displayed.
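In GLSL this sphere case reduces to a one-line signed-distance sketch (negative inside, zero on the surface, positive outside):
float sphereSDF(vec3 p, float r) // distance equation of a ball of radius r
{
    return length(p) - r; // sqrt(x*x + y*y + z*z) - r
}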
Once the distance equation is available, the normal of the volume surface can be found from its partial differentials:
n = ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z)
where x, y and z are the pixel point coordinates in the model, i.e. three-dimensional data; once the three-dimensional data are available, they can be mapped to screen space.
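Numerically, these partial differentials are commonly approximated by central differences of the distance function. In the sketch below, sceneSDF() stands for the model's distance equation f, and the step size eps is an illustrative assumption.
float sceneSDF(vec3 p); // assumed distance equation f of the model

vec3 surfaceNormal(vec3 p) // normal as the gradient (df/dx, df/dy, df/dz)
{
    const float eps = 0.001;
    return normalize(vec3(
        sceneSDF(p + vec3(eps, 0.0, 0.0)) - sceneSDF(p - vec3(eps, 0.0, 0.0)),
        sceneSDF(p + vec3(0.0, eps, 0.0)) - sceneSDF(p - vec3(0.0, eps, 0.0)),
        sceneSDF(p + vec3(0.0, 0.0, eps)) - sceneSDF(p - vec3(0.0, 0.0, eps))));
}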
Given this isosurface representation, it must be determined whether the surface can be observed from screen space, that is, whether the ray on which a screen-space pixel lies intersects the cube without being blocked by another non-transparent object; the occlusion culling problem is not considered for now.
First, the screen-space vector must be converted into a world-coordinate vector; the ray direction is converted simply with the InvViewProjection matrix. Of course, if the conversion is done in the pixel shader, the coordinate must first be converted from pixel coordinates to clip-space coordinates and then into world space; alternatively, the conversion can be done in the vertex shader and then interpolated by the hardware.
Optionally, since a pixel may lie inside the model, its position needs to be determined. As shown in fig. 10, a flowchart of another ray tracing method provided in the embodiment of the present application, a ray starts from point P0. The distance to the closest point on the surface is determined, and a sphere around P0 is expanded until it touches the surface; point P1 is then the intersection between the ray and that sphere. Repeating this process generates points P2 through P4; as the figure shows, point P4 is an actual intersection point, i.e. it lies on the model surface.
If the current pixel position is already inside the cube, i.e. the ray has directly intersected it, the color of that pixel is the color of the cube; otherwise it is necessary to find whether the ray in the pixel direction can intersect the cube surface. With the circular search shown in fig. 10, the intersection can be determined quickly in an iterative manner, and the color values of the model can thus be obtained.
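The circular search of fig. 10 is what is usually called sphere tracing. A GLSL sketch of it follows, reusing sceneSDF() from the sketch above; the iteration cap, hit tolerance and far limit are illustrative assumptions.
bool sphereTrace(vec3 origin, vec3 dir, out vec3 hitPoint)
{
    float t = 0.0; // distance marched from P0 along the ray
    for (int i = 0; i < 64; i++)
    {
        vec3 p = origin + t * dir;
        float d = sceneSDF(p);     // radius of the expanded sphere
        if (d < 0.001)             // intersection found (point P4 in fig. 10)
        {
            hitPoint = p;
            return true;
        }
        t += d;                    // step to the sphere boundary (P1, P2, ...)
        if (t > 100.0)             // ray left the scene without intersecting
        {
            break;
        }
    }
    return false;
}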
803. The computer device determines a noise map of the shape map in at least two dimensions according to preset rules.
804. The computer device performs image synthesis based on the noise map.
In this embodiment, steps 803 and 804 are similar to steps 302 and 303 shown in fig. 3, and the description of the related features may be referred to, which is not repeated herein.
805. The computer device performs image updates based on the detail map.
In this embodiment, a detail map may be prepared to further improve the accuracy of the image. It is understood that the detail map may be obtained separately or generated based on the shape map.
Next, the process of generating a detail map based on the shape map is described. First, the hierarchical features of the shape map are acquired; then a detail map is obtained according to the hierarchical features, the detail map being obtained by frequency adjustment based on the noise map; the target image is then updated based on the detail map. For example: the detail map is Worley noise of different frequencies on the R, G and A channels.
806. The computer device performs an image update based on the rotation (curl) map.
In this embodiment, since some models are highly chaotic, a further noise addition step is needed, for example to simulate an image of a cloud being blown about.
Specifically, the process of updating the image based on the rotation (curl) map can be performed with reference to the following formula:
curl A = ∇ × A = (∂Az/∂y - ∂Ay/∂z) i + (∂Ax/∂z - ∂Az/∂x) j + (∂Ay/∂x - ∂Ax/∂y) k
where curl A drives the transformed image; x, y and z are the coordinates of the image before transformation; Ax, Ay and Az are the components of the noise field A; and i, j and k are the coordinate unit vectors.
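The curl formula above can be evaluated numerically with central differences over the vector noise field. In the GLSL sketch below, noiseField() stands for an assumed vector-valued noise A, and the step size e is an illustrative choice.
vec3 noiseField(vec3 p); // assumed vector-valued noise field A

vec3 curlNoise(vec3 p)
{
    const float e = 0.01;
    vec3 dAdx = (noiseField(p + vec3(e, 0.0, 0.0)) - noiseField(p - vec3(e, 0.0, 0.0))) / (2.0 * e);
    vec3 dAdy = (noiseField(p + vec3(0.0, e, 0.0)) - noiseField(p - vec3(0.0, e, 0.0))) / (2.0 * e);
    vec3 dAdz = (noiseField(p + vec3(0.0, 0.0, e)) - noiseField(p - vec3(0.0, 0.0, e))) / (2.0 * e);
    return vec3(dAdy.z - dAdz.y,  // dAz/dy - dAy/dz
                dAdz.x - dAdx.z,  // dAx/dz - dAz/dx
                dAdx.y - dAdy.x); // dAy/dx - dAx/dy
}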
807. The computer device generates a target image.
In view of the above embodiments, the computer device can perform the image generation process without redundant data preparation by optimizing the shape map source, and because only one generation pass is required, image generation efficiency is greatly improved; furthermore, the accuracy of the simulation of complex models is improved through the optimization of details and the simulation of curl.
Next, the cloud image generation will be described as an example, in conjunction with the above-described image generation method.
In one possible scenario, the simulation of the cloud image may also set some basic location parameters and cloud-related element parameters to enrich the detailed content of the cloud image, such as: the cloud motion speed, the cloud radius, the motion center coordinate, the cloud thickness, the cloud height, the slice distribution, the observation visual field range, the raindrop density and other parameters, and the specific parameter types are determined by actual scenes.
First, considering that the shape of a cloud is blob-like, it can be modeled with the Worley noise described above, i.e. the first dimension maps, since Worley noise is characterized by blobs of noise forming around random points. For irregular disturbances, Perlin noise can be used: the Perlin noise component of the second dimension map is sufficiently random to form the irregular disturbances. In addition, since a cloud is composed of clusters both large and small, Worley noise of different frequencies is used in the synthesis, finally generating the basic shape of the cloud.
As shown in fig. 11, a scene schematic diagram of another image generation method provided in the embodiment of the present application, the channels of the shape map are as follows: the R channel holds the second dimension map, a Perlin-Worley noise map, while the G, B and A channels hold the first dimension maps, Worley noise maps of different frequencies. In the synthesis, the Perlin-Worley noise serves as the base, the Remap function controls the Worley noise of the different frequencies, and the obtained results are added in a certain proportion to produce the final result.
It can be understood that after the above process the basic cloud shape already exists, but there is no cloud layer distribution; a distribution map, that is, a detail map, is therefore used to distribute the cloud layers. As shown in fig. 12, a scene schematic diagram of another image generation method provided in this embodiment of the present application, after the detail maps are superimposed the cloud is seen to spread, and the features C1 and C2 generated by the flow distribution appear.
Further, in order to add more detail and simulate the appearance of being blown by the wind, curl noise can be added so that the cloud appears to be rolled up by the airflow. As shown in fig. 13, a scene schematic diagram of another image generation method provided in this embodiment of the present application, a curl noise map is first obtained, its three channels respectively representing rotational deflection. The values of the different channels of the curl noise are used to deflect the positions at which the first dimension maps, that is, the different channels of the volume texture, are sampled; a remap function is then applied to generate the target image, simulating an appearance disturbed by fluid. One hedged sketch of this deflection is given below.
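In the sketch, the curl map offsets the coordinates at which the volume texture is sampled before the channels are fused with remap as before; the texture names, the unpacking to [-1, 1] and the strength parameter are assumptions for illustration only.
vec4 sampleCurledNoise(sampler3D volumeNoise, sampler3D curlMap, vec3 uvw, float strength)
{
    vec3 deflect = texture(curlMap, uvw).rgb * 2.0 - 1.0;   // unpack to [-1, 1]
    return texture(volumeNoise, uvw + deflect * strength);  // deflected sample
}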
Through the above embodiments, the image generation method provided by the present application can greatly improve game quality through the simulation of a sea of clouds; it strongly shapes the whole game environment and allows the player to become more immersed in the game.
In order to better implement the above-mentioned aspects of the embodiments of the present application, the following also provides related apparatuses for implementing the above-mentioned aspects. Referring to fig. 14, fig. 14 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present disclosure, in which an image generating apparatus 1400 includes:
an acquisition unit 1401 configured to acquire a shape map, which is related to a target image;
a determining unit 1402, configured to determine a noise map of the shape map in at least two dimensions according to a preset rule, where the preset rule is determined based on color channel information of the shape map, and the color channel information corresponds to the noise map;
a generating unit 1403, configured to perform image synthesis based on the noise map to generate the target image.
Optionally, in some possible implementations of the present application, the determining unit 1402 is specifically configured to determine at least one first dimension map based on a shape feature of the shape map, where the shape feature is determined based on a pixel distribution of the shape map;
the determining unit 1402 is specifically configured to perform adjustment according to the first dimension map as a template to determine the second dimension map;
the generating unit 1403 is specifically configured to perform image synthesis according to the first dimension map and the second dimension map to generate the target image.
Optionally, in some possible implementations of the present application, the determining unit 1402 is specifically configured to obtain a pixel grid of the shape map, where the pixel grid includes a plurality of pixels;
the determining unit 1402 is specifically configured to determine at least one feature point in the pixel grid;
the determining unit 1402 is specifically configured to distribute a noise value according to the distance between the feature point and the pixel to obtain at least one first dimension map.
Optionally, in some possible implementations of the present application, the determining unit 1402 is further configured to determine an image type indicated by the shape map;
the determining unit 1402 is further configured to invert the noise value according to the image type to update the first dimension map.
Optionally, in some possible implementations of the present application, the determining unit 1402 is specifically configured to distribute noise values according to distances between the feature points and the pixels;
the determining unit 1402 is specifically configured to perform frequency adjustment on the noise value to obtain at least one first dimension map.
Optionally, in some possible implementations of the present application, the determining unit 1402 is specifically configured to invert the first dimension map;
the determining unit 1402 is specifically configured to process the inverted first dimension map according to fractal brownian motion;
the determining unit 1402 is specifically configured to intercept the noise values in the processed first dimension map based on a preset threshold to obtain the second dimension map.
Optionally, in some possible implementations of the present application, the generating unit 1403 is specifically configured to adjust the first dimension map to different frequencies based on a preset function;
the generating unit 1403 is specifically configured to superimpose the first dimension maps with different frequencies based on the second dimension map to generate the target image.
Optionally, in some possible implementation manners of the present application, the generating unit 1403 is further configured to obtain a hierarchical feature of the shape map;
the generating unit 1403 is further configured to obtain a detail map according to the hierarchical features, where the detail map is obtained by frequency adjustment based on the noise map;
the generating unit 1403 is further configured to update the target image based on the detail map.
Optionally, in some possible implementation manners of the present application, the generating unit 1403 is further configured to perform rotation variation on the noise map to obtain a rotation map;
the generating unit 1403 is further configured to update the target image based on the rotation map.
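The rotation variation can be sketched as resampling the noise map under a 2-D rotation about its center. The angle and the nearest-neighbour resampling below are illustrative assumptions; the embodiment does not fix either:

```python
import numpy as np

def rotate_noise_map(noise_map, angle):
    """Rotation map: resample the noise map under a rotation about its
    center (nearest-neighbour lookup keeps the sketch short)."""
    h, w = noise_map.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    c, s = np.cos(angle), np.sin(angle)
    # Inverse-rotate each destination pixel back to its source location.
    src_y = c * (ys - cy) - s * (xs - cx) + cy
    src_x = s * (ys - cy) + c * (xs - cx) + cx
    src_y = np.clip(np.rint(src_y).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(src_x).astype(int), 0, w - 1)
    return noise_map[src_y, src_x]
```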
Optionally, in some possible implementations of the present application, the obtaining unit 1401 is specifically configured to obtain a three-dimensional shape model;
the obtaining unit 1401 is specifically configured to perform ray tracing on the three-dimensional shape model in a target interface space to obtain the shape map, where the target interface space is a space where the target image is located.
Optionally, in some possible implementations of the present application, the obtaining unit 1401 is specifically configured to obtain distance information between a light source and the three-dimensional shape model, where the light source is a reference point in the target interface space;
the obtaining unit 1401 is specifically configured to perform calculation of a distance field function based on the distance information to obtain light pixel distribution;
the obtaining unit 1401 is specifically configured to generate the shape map according to the light pixel distribution.
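The distance field calculation can be read as sphere-traced ray marching: each ray is advanced by the local distance field value until it hits the model, and hit pixels form the light pixel distribution. A minimal sketch, assuming a sphere as a stand-in for the three-dimensional shape model and a simple pinhole-style ray setup, neither of which is fixed by this embodiment:

```python
import numpy as np

def sphere_sdf(p, center=np.array([0.0, 0.0, 3.0]), radius=1.0):
    """Signed distance from point p to a sphere (stand-in shape model)."""
    return np.linalg.norm(p - center) - radius

def march_shape_map(height=64, width=64, max_steps=64, eps=1e-3):
    """Ray-march the distance field from a reference point to obtain
    the light pixel distribution that forms the shape map."""
    shape_map = np.zeros((height, width))
    origin = np.array([0.0, 0.0, 0.0])  # light source / reference point
    for iy in range(height):
        for ix in range(width):
            # Ray direction through the target interface space.
            d = np.array([ix / width - 0.5, iy / height - 0.5, 1.0])
            d /= np.linalg.norm(d)
            t = 0.0
            for _ in range(max_steps):
                dist = sphere_sdf(origin + t * d)
                if dist < eps:          # hit: this pixel receives light
                    shape_map[iy, ix] = 1.0
                    break
                t += dist               # safe step by the distance field
            # no hit within max_steps leaves the pixel at 0
    return shape_map
```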
In summary, a shape map related to the target image is obtained; a noise map of the shape map in at least two dimensions is then determined according to a preset rule, where the preset rule is determined based on color channel information of the shape map and the color channel information corresponds to the noise map; and image synthesis is performed based on the noise map to generate the target image. In this way a complex three-dimensional image is simulated from a small number of maps: because the synthesis material is obtained by noise transformation of the shape map, a large amount of resources need not be occupied and repeated map superposition operations are avoided, which improves image generation efficiency.
An embodiment of the present application further provides a terminal device, as shown in fig. 15, which is a schematic structural diagram of another terminal device provided in an embodiment of the present application. For convenience of description, only the portion related to this embodiment is shown; for specific technical details that are not disclosed here, refer to the method portion of the embodiments of the present application. The terminal may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example:
fig. 15 is a block diagram illustrating a partial structure of a mobile phone related to a terminal provided in an embodiment of the present application. Referring to fig. 15, the mobile phone includes: radio frequency (RF) circuitry 1510, a memory 1520, an input unit 1530, a display unit 1540, a sensor 1550, audio circuitry 1560, a wireless fidelity (WiFi) module 1570, a processor 1580, and a power supply 1590. Those skilled in the art will appreciate that the configuration shown in fig. 15 does not constitute a limitation: the mobile phone may include more or fewer components than those shown, combine certain components, or arrange the components differently.
The following describes each component of the mobile phone in detail with reference to fig. 15:
The memory 1520 may be used to store software programs and modules, and the processor 1580 performs the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 1520 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 1530 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, can collect touch operations performed by the user on or near it (for example, operations performed on or near the touch panel 1531 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection devices according to a preset program. Optionally, the touch panel 1531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1580, and receives and executes commands sent by the processor 1580. The touch panel 1531 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave panel, among other types. In addition to the touch panel 1531, the input unit 1530 may include other input devices 1532, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and an on/off key), a trackball, a mouse, a joystick, and the like.
The display unit 1540 may be configured to display information input by the user or information provided to the user, as well as the various menus of the mobile phone. The display unit 1540 may include a display panel 1541, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 1531 may cover the display panel 1541; when the touch panel 1531 detects a touch operation on or near it, the operation is transmitted to the processor 1580 to determine the type of the touch event, after which the processor 1580 provides a corresponding visual output on the display panel 1541 according to the type of the touch event.
The mobile phone may also include at least one sensor 1550, such as a light sensor, a motion sensor, or another sensor. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display panel 1541 according to the brightness of ambient light, and a proximity sensor, which turns off the display panel 1541 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer and tap detection). Other sensors that can be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described further here.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1570, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 15 shows the WiFi module 1570, it is understood that the module is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the invention.
The processor 1580 is the control center of the mobile phone: it connects the various parts of the entire phone through various interfaces and lines, and performs the various functions of the phone and processes data by running or executing the software programs and/or modules stored in the memory 1520 and calling the data stored in the memory 1520, thereby monitoring the mobile phone as a whole. Optionally, the processor 1580 may include one or more processing units; optionally, the processor 1580 may integrate an application processor, which mainly handles the operating system, the user interface, and application programs, and a modem processor, which mainly handles wireless communication. It is to be appreciated that the modem processor may alternatively not be integrated into the processor 1580.
The mobile phone also includes a power supply 1590 (such as a battery) for supplying power to the components. Optionally, the power supply may be logically connected to the processor 1580 through a power management system, so that charging, discharging, and power consumption are managed through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In this embodiment of the present application, the processor 1580 included in the terminal further has the function of executing the steps of the image generation method described above.
An embodiment of the present application further provides a computer-readable storage medium storing image generation instructions which, when run on a computer, cause the computer to perform the steps performed by the image generation apparatus in the methods described in the foregoing embodiments shown in fig. 3 to fig. 13.
Also provided in the embodiments of the present application is a computer program product including image generation instructions which, when run on a computer, cause the computer to perform the steps performed by the image generation apparatus in the methods described in the foregoing embodiments shown in fig. 3 to fig. 13.
The embodiment of the present application further provides an image generation system, and the image generation system may include the image generation apparatus in the embodiment described in fig. 14 or the terminal device described in fig. 15.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an image generating apparatus, or a network device) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (15)
1. A method of image generation, comprising:
acquiring a shape map, wherein the shape map is related to a target image;
determining a noise map of the shape map in at least two dimensions according to a preset rule, wherein the preset rule is determined based on color channel information of the shape map, and the color channel information corresponds to the noise map;
and performing image synthesis based on the noise map to generate the target image.
2. The method according to claim 1, wherein the noise map comprises a first dimension map and a second dimension map, and wherein the determining the noise map of the shape map in at least two dimensions according to a preset rule comprises:
determining at least one of the first dimension maps based on shape features of the shape map, the shape features determined based on a pixel distribution of the shape map;
adjusting according to the first dimension map as a template to determine the second dimension map;
and the performing image synthesis based on the noise map to generate the target image comprises:
and carrying out image synthesis according to the first dimension map and the second dimension map so as to generate the target image.
3. The method of claim 2, wherein said determining at least one of said first dimension maps based on shape features of said shape map comprises:
acquiring a pixel grid of the shape map, wherein the pixel grid comprises a plurality of pixels;
determining at least one feature point in the pixel grid;
and distributing noise values according to the distance between the characteristic points and the pixels to obtain at least one first dimension map.
4. The method of claim 3, further comprising:
determining an image type indicated by the shape map;
and inverting the noise value according to the image type to update the first dimension map.
5. The method of claim 3, wherein distributing noise values according to the distances between the feature points and the pixels to obtain at least one first dimension map comprises:
distributing noise values according to the distance between the characteristic points and the pixels;
and carrying out frequency adjustment on the noise value to obtain at least one first dimension map.
6. The method of claim 2, wherein the adjusting according to the first dimension map as a template to determine the second dimension map comprises:
inverting the first dimension map;
processing the inverted first dimension map according to fractal Brownian motion;
and intercepting the noise value in the processed first dimension map based on a preset threshold value to obtain the second dimension map.
7. The method of claim 2, wherein the performing image synthesis according to the first dimension map and the second dimension map to generate the target image comprises:
adjusting the first dimension map to different frequencies based on a preset function;
and overlapping the first dimension maps with different frequencies by taking the second dimension map as a base to generate the target image.
8. The method of claim 1, further comprising:
acquiring the hierarchical characteristics of the shape map;
acquiring a detail map according to the level features, wherein the detail map is obtained by adjusting the frequency based on the noise map;
updating the target image based on the detail map.
9. The method of claim 1, further comprising:
performing rotation variation on the noise map to obtain a rotation map;
and updating the target image based on the rotation map.
10. The method of claim 1, wherein the obtaining the shape map comprises:
acquiring a three-dimensional shape model;
and performing ray tracing on the three-dimensional shape model in a target interface space to obtain the shape map, wherein the target interface space is a space where the target image is located.
11. The method of claim 10, wherein the performing ray tracing on the three-dimensional shape model in the target interface space to obtain the shape map comprises:
obtaining distance information from a light source to the three-dimensional shape model, wherein the light source is a reference point in the target interface space;
performing a calculation of a distance field function based on the distance information to obtain a light pixel distribution;
and generating the shape map according to the light pixel distribution.
12. The method of claim 1, wherein the shape map comprises a shape of a cloud, and wherein the target image is indicative of the shape of the cloud in three dimensions.
13. An apparatus for image generation, comprising:
an acquisition unit configured to acquire a shape map, the shape map being related to a target image;
a determining unit, configured to determine a noise map of the shape map in at least two dimensions according to a preset rule, where the preset rule is determined based on color channel information of the shape map, and the color channel information corresponds to the noise map;
a generating unit configured to perform image synthesis based on the noise map to generate the target image.
14. A computer device, comprising a processor and a memory, wherein:
the memory is used for storing program codes; the processor is configured to perform the method of image generation of any of claims 1 to 12 according to instructions in the program code.
15. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of image generation of any one of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010207706.0A (granted as CN111445563B) | 2020-03-23 | 2020-03-23 | Image generation method and related device
Publications (2)
Publication Number | Publication Date |
---|---|
CN111445563A | 2020-07-24
CN111445563B CN111445563B (en) | 2023-03-10 |
Family
ID=71629413
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010207706.0A (granted as CN111445563B, active) | Image generation method and related device | 2020-03-23 | 2020-03-23
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111445563B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107682731A (en) * | 2017-10-24 | 2018-02-09 | 北京奇虎科技有限公司 | Video data distortion processing method, device, computing device and storage medium |
CN108295467A (en) * | 2018-02-06 | 2018-07-20 | 网易(杭州)网络有限公司 | Rendering method, device and the storage medium of image, processor and terminal |
CN109427083A (en) * | 2017-08-17 | 2019-03-05 | 腾讯科技(深圳)有限公司 | Display methods, device, terminal and the storage medium of three-dimensional avatars |
CN109949386A (en) * | 2019-03-07 | 2019-06-28 | 北京旷视科技有限公司 | A kind of Method for Texture Image Synthesis and device |
CN110648274A (en) * | 2019-09-23 | 2020-01-03 | 阿里巴巴集团控股有限公司 | Fisheye image generation method and device |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112419465A (en) * | 2020-12-09 | 2021-02-26 | 网易(杭州)网络有限公司 | Rendering method and device of virtual model |
CN112419465B (en) * | 2020-12-09 | 2024-05-28 | 网易(杭州)网络有限公司 | Virtual model rendering method and device |
CN113240578A (en) * | 2021-05-13 | 2021-08-10 | 北京达佳互联信息技术有限公司 | Image special effect generation method and device, electronic equipment and storage medium |
CN113240578B (en) * | 2021-05-13 | 2024-05-21 | 北京达佳互联信息技术有限公司 | Image special effect generation method and device, electronic equipment and storage medium |
CN113509731A (en) * | 2021-05-19 | 2021-10-19 | 网易(杭州)网络有限公司 | Fluid model processing method and device, electronic equipment and storage medium |
CN113509731B (en) * | 2021-05-19 | 2024-06-04 | 网易(杭州)网络有限公司 | Fluid model processing method and device, electronic equipment and storage medium |
CN114339448A (en) * | 2021-12-31 | 2022-04-12 | 深圳万兴软件有限公司 | Method and device for manufacturing light beam video special effect, computer equipment and storage medium |
CN114339448B (en) * | 2021-12-31 | 2024-02-13 | 深圳万兴软件有限公司 | Method and device for manufacturing special effects of beam video, computer equipment and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40026167; Country of ref document: HK
 | GR01 | Patent grant |