CN115018968A - Image rendering method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN115018968A
CN115018968A (application CN202210652062.5A)
Authority
CN
China
Prior art keywords
rendering
noise
target
sampling
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210652062.5A
Other languages
Chinese (zh)
Inventor
许一栋
田宽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Granular Shanghai Information Technology Co ltd
Original Assignee
Granular Shanghai Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Granular Shanghai Information Technology Co ltd filed Critical Granular Shanghai Information Technology Co ltd
Priority to CN202210652062.5A
Publication of CN115018968A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70

Abstract

The disclosure relates to an image rendering method, an image rendering apparatus, a storage medium, and an electronic device. The method comprises: determining a target noise image; determining a target sampling mode for the target noise image according to a target rendering material to be rendered, wherein the target sampling mode comprises the number of sampling passes and the sampling parameters of each pass; sampling the target noise image multiple times according to the number of passes and the per-pass parameters to obtain noise samples; and performing image rendering according to the noise samples to obtain a rendered target image. Because the target noise image serves as the rendering texture, texture randomness is improved and rendering realism is enhanced; the image-magnification distortion caused by the limited resolution of a fixed texture map is effectively avoided; and, during rendering, deformation matching the target rendering material can be synchronized into the rendering model, deforming the model itself, which helps improve the realism and stereoscopic quality of the rendering result.

Description

Image rendering method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image rendering method and apparatus, a storage medium, and an electronic device.
Background
In scenes such as movie special effects, games, AR (Augmented Reality), and VR (Virtual Reality), realistic images and animations, such as dynamic effects simulating water, smoke, and clouds, generally need to be generated by a rendering engine. However, strict physical computation (e.g., fluid simulation) has high computational complexity and usually cannot run in real time. The solution typically adopted in the related art is to translate, rotate, and superimpose static texture maps in cooperation with UV animation (U and V being the texture-map coordinates) to approximate the corresponding dynamic effect. In this approach, however, the periodic repetition of the texture produced by the UV animation is noticeable and does not match the randomness of fluids such as water, smoke, and clouds; the static texture map has a fixed resolution, so the image distorts when magnified; and because the texture map is merely attached to the surface of the rendering mesh, it cannot cause any spatial geometric change of the mesh. The generated images and animations therefore look unrealistic and the simulation effect is poor.
Disclosure of Invention
The present disclosure provides an image rendering method, an image rendering device, a storage medium, and an electronic apparatus, which are beneficial to improving stereoscopic impression and reality of image rendering.
In order to achieve the above object, according to a first aspect of the present disclosure, there is provided an image rendering method including:
determining a target noise image;
determining a target sampling mode aiming at the target noise image according to a target rendering material to be rendered, wherein the target sampling mode comprises sampling times and sampling parameters of each sampling;
sampling the target noise image for multiple times according to the sampling times and the sampling parameters of each sampling to obtain noise samples;
and rendering the image according to the noise sample to obtain a rendered target image.
Optionally, the determining the target noise image includes:
generating an initial noise image in response to identifying the image rendering request;
and carrying out continuous processing on the initial noise image to obtain the target noise image.
Optionally, the generating an initial noise image comprises:
generating a noise grid, and generating a random number at each node in the noise grid to obtain the initial noise image;
the performing continuous processing on the initial noise image to obtain the target noise image includes:
and carrying out interpolation processing on the non-node part in the noise grid based on the random number corresponding to the node in the initial noise image to obtain the target noise image.
Optionally, the determining a target sampling mode for the target noise image according to a target rendering material to be rendered includes:
and determining a sampling mode corresponding to the target rendering material as the target sampling mode according to a predefined corresponding relation between the rendering material and the sampling mode, wherein the sampling parameter at least comprises a sampling frequency.
Optionally, the performing image rendering according to the noise sample to obtain a rendered target image includes:
determining a rendering model for image rendering, the rendering model being composed of a plurality of model meshes;
determining a noise superposition mode corresponding to the target rendering material as a target superposition mode according to a predefined corresponding relation between the rendering material and the noise superposition mode, wherein the target superposition mode is used for indicating a superposition sequence and/or a superposition direction aiming at noise sampling;
according to the target superposition mode, sequentially superposing the noise samples to the grid vertexes of the rendering model to obtain rendering parameters of the rendering model;
and rendering the image according to the rendering parameters to obtain the target image.
Optionally, the target superposition mode includes a first noise superposition mode for model mesh vertices in the rendering model and a second noise superposition mode for pixel points in the rendering model;
the sequentially superposing the noise samples to the mesh vertexes of the rendering model according to the target superposition mode to obtain the rendering parameters of the rendering model includes:
according to the first noise superposition mode, superposing the noise samples indicated by the first noise superposition mode to model mesh vertexes in the rendering model to obtain a first superposition result, wherein the first superposition result is at least used for indicating a first position of each model mesh vertex in the rendering model;
according to the second noise superposition mode, superposing the noise samples indicated by the second noise superposition mode to the pixel points in the first superposition result to obtain a second superposition result, wherein the second superposition result is at least used for indicating the second positions of the pixel points in the rendering model;
performing illumination calculation on the pixel points in the second superposition result to obtain pixel values of the pixel points in the rendering model;
and taking the second position and the pixel value of the pixel point in the rendering model as the rendering parameters.
Optionally, the target rendering material is water, the first noise superposition mode is used for simulating a large wave of water, and the second noise superposition mode is used for simulating a large wave and a small wave of water.
According to a second aspect of the present disclosure, there is provided an image rendering apparatus, the apparatus comprising:
a first determination module for determining a target noise image;
the second determination module is used for determining a target sampling mode aiming at the target noise image according to a target rendering material to be rendered, wherein the target sampling mode comprises sampling times and sampling parameters of each sampling;
the sampling module is used for sampling the target noise image for multiple times according to the sampling times and the sampling parameters of each sampling to obtain noise samples;
and the rendering module is used for rendering the image according to the noise samples to obtain a rendered target image.
Optionally, the first determining module includes:
a generation submodule for generating an initial noise image in response to identifying the image rendering request;
and the processing submodule is used for carrying out continuous processing on the initial noise image to obtain the target noise image.
Optionally, the generation submodule is configured to generate a noise grid, and generate a random number at each node in the noise grid to obtain the initial noise image;
and the processing submodule is used for carrying out interpolation processing on a non-node part in the noise grid based on the random number corresponding to the node in the initial noise image to obtain the target noise image.
Optionally, the second determining module is configured to determine, according to a predefined correspondence between a rendering material and a sampling manner, a sampling manner corresponding to the target rendering material as the target sampling manner, where the sampling parameter at least includes a sampling frequency.
Optionally, the rendering module includes:
a first determining submodule for determining a rendering model for image rendering, the rendering model being composed of a plurality of model meshes;
the second determining submodule is used for determining a noise superposition mode corresponding to the target rendering material as a target superposition mode according to a predefined corresponding relation between the rendering material and the noise superposition mode, and the target superposition mode is used for indicating the superposition sequence and/or the superposition direction aiming at the noise sampling;
the first superposition submodule is used for sequentially superposing the noise samples to the grid vertexes of the rendering model according to the target superposition mode so as to obtain rendering parameters of the rendering model;
and the rendering submodule is used for rendering the image according to the rendering parameters to obtain the target image.
Optionally, the target superposition mode includes a first noise superposition mode for model mesh vertices in the rendering model and a second noise superposition mode for pixel points in the rendering model;
the first superposition sub-module comprises:
the second superposition submodule is used for superposing the noise samples indicated by the first noise superposition mode to model mesh vertexes in the rendering model according to the first noise superposition mode to obtain a first superposition result, and the first superposition result is at least used for indicating the first position of each model mesh vertex in the rendering model;
a third superposition submodule, configured to superpose the noise sample indicated by the second noise superposition mode to the pixel point in the first superposition result according to the second noise superposition mode, so as to obtain a second superposition result, where the second superposition result is at least used to indicate a second position of the pixel point in the rendering model;
the calculation submodule is used for carrying out illumination calculation on the pixel points in the second superposition result to obtain pixel values of the pixel points in the rendering model;
and the third determining submodule is used for taking the second position and the pixel value of the pixel point in the rendering model as the rendering parameter.
Optionally, the target rendering material is water, the first noise superposition mode is used for simulating a large wave of water, and the second noise superposition mode is used for simulating a large wave and a small wave of water.
According to a third aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect of the disclosure.
With the above technical solution, a target noise image is determined; a target sampling mode for the target noise image is determined according to the target rendering material to be rendered, the target sampling mode comprising the number of sampling passes and the sampling parameters of each pass; the target noise image is sampled multiple times accordingly to obtain noise samples; and image rendering is performed according to the noise samples to obtain a rendered target image. Because a generated target noise image serves as the rendering texture, texture randomness is improved, the rendered image no longer shows monotonous repetition or translation of a texture map, and rendering realism is enhanced. Sampling from the target noise image also effectively avoids the image-magnification distortion caused by the limited resolution of a fixed texture map. Meanwhile, because the noise is sampled according to the target rendering material, deformation matching that material can be synchronized into the rendering model during rendering, deforming the model itself, which helps improve the realism and stereoscopic quality of the rendering result.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow diagram of an image rendering method provided in accordance with one embodiment of the present disclosure;
FIG. 2 is a block diagram of an image rendering apparatus provided in accordance with one embodiment of the present disclosure;
FIG. 3 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples are given by way of illustration and explanation only and are not intended to limit the present disclosure.
Fig. 1 is a flowchart of an image rendering method provided according to an embodiment of the present disclosure. As shown in fig. 1, the method may include steps 11 to 14.
In step 11, a target noise image is determined.
The target noise image may be obtained using currently common noise-generation techniques. At the same time, because materials in nature (e.g., water, smoke, clouds) are random but not completely disordered, the target noise image should have continuity in order to improve the realism of image rendering. Illustratively, the target noise image may be obtained using value noise, gradient noise, Perlin noise, or the like.
In one possible embodiment, step 11 may comprise the steps of:
generating an initial noise image in response to identifying the image rendering request;
and carrying out continuous processing on the initial noise image to obtain a target noise image.
In order to improve the realism of image rendering, the target noise image used in the current rendering may be generated in real time when an image rendering request is recognized; that is, each time a rendering request is recognized, a noise image is generated by a chosen algorithm and used as the target noise image for that rendering.
Illustratively, generating an initial noise image may be accomplished by:
a noise grid is generated, and a random number is generated at each node in the noise grid to obtain an initial noise image.
Typically, an initial noise grid (i.e., lattice) may be generated. The structure of the noise grid may be set according to actual requirements, for example, if in two-dimensional image rendering, a two-dimensional grid may be generated as the noise grid, and if in three-dimensional image rendering, a three-dimensional grid may be generated as the noise grid.
After the noise grid is generated, a pseudo-random number can be generated at each grid node to obtain the initial noise image. This initial noise image resembles the "snow" static displayed by an old television set: its frequency is high and its randomness is strong, but it has no continuity, so it cannot be used directly as the target noise image.
After the initial noise image is obtained, it may be processed with a continuous-processing method to obtain a continuous target noise image. Different continuous-processing methods yield target noise images with different characteristics, so the method can be set according to actual requirements; this disclosure does not limit the choice.
Illustratively, performing continuous processing on the initial noise image to obtain the target noise image may be implemented as follows:
and based on the random number corresponding to the node in the initial noise image, carrying out interpolation processing on the non-node part in the noise grid to obtain a target noise image.
After the initial noise image is obtained, the initial noise image may be subjected to a continuous processing by way of interpolation processing, so as to obtain a target noise image. The target noise images with different characteristics can be obtained by selecting different interpolation methods, and which interpolation method is used can be freely selected according to actual requirements, which is not limited by the disclosure. For example, a smoother target noise image can be obtained by using a cubic spline interpolation method.
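As a concrete illustration of the value-noise construction described above (a random number at each lattice node, smoothly interpolated everywhere else), here is a minimal NumPy sketch. The function name, grid step, and the smoothstep weights are illustrative choices, not part of the disclosure; the smoothstep interpolation plays the same role as the cubic-spline interpolation mentioned in the text.

```python
import numpy as np

def value_noise_2d(width, height, grid_step, seed=0):
    """Sketch of 2-D value noise: pseudo-random numbers at lattice nodes,
    smoothly interpolated for the non-node pixels."""
    rng = np.random.default_rng(seed)
    gw, gh = width // grid_step + 2, height // grid_step + 2
    lattice = rng.random((gh, gw))              # random number at each grid node
    ys, xs = np.mgrid[0:height, 0:width] / grid_step
    x0, y0 = xs.astype(int), ys.astype(int)     # surrounding node indices
    fx, fy = xs - x0, ys - y0                   # fractional position in cell
    # smoothstep weights give a continuous result (no hard cell boundaries)
    wx, wy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = lattice[y0, x0] * (1 - wx) + lattice[y0, x0 + 1] * wx
    bot = lattice[y0 + 1, x0] * (1 - wx) + lattice[y0 + 1, x0 + 1] * wx
    return top * (1 - wy) + bot * wy

noise = value_noise_2d(64, 64, 8)
```

Choosing a higher-order interpolant (e.g., a quintic or cubic spline) would yield a smoother target noise image, as the text notes.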
In step 12, a target sampling mode for the target noise image is determined according to the target rendering material to be rendered.
The target noise image itself is a fixed noise image; in order to obtain richer noise forms for rendering, multiple samplings of it can be superimposed.
Therefore, for the target rendering material to be rendered, the target sampling mode for the target noise image can be determined for later-stage sampling superposition. The target sampling mode may include the number of sampling times and sampling parameters of each sampling.
By way of example, the sampling parameters may include, but are not limited to, a sampling frequency, a sampling position, and the like. For example, the target noise image may be sampled multiple times with the sampling positions offset according to a certain rule. In one possible embodiment, a spectrum-synthesis mode may be adopted, in which each sampling pass uses twice the frequency of the previous pass and the intensity of its result is attenuated by half.
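The spectrum-synthesis rule just described (double the frequency, halve the intensity on each pass) can be sketched as a fractal-sum loop. Here `noise_fn` stands in for any continuous noise lookup such as the value noise above, and the octave count is an assumed parameter:

```python
def fbm_sample(noise_fn, x, y, octaves=5):
    """Spectrum-synthesis sampling sketch: each pass samples at twice the
    frequency of the previous pass, with the result intensity halved."""
    value, freq, amp = 0.0, 1.0, 1.0
    for _ in range(octaves):
        value += amp * noise_fn(x * freq, y * freq)
        freq *= 2.0   # next pass at twice the frequency
        amp *= 0.5    # ...with half the intensity
    return value
```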
Different materials have different structures; accordingly, during rendering, noise can be used to distinguish between rendering materials. The way the noise is used therefore needs to be set specifically for each rendering material, so that the noise data is added harmoniously.
For example, for a water material, the water surface is composed of waves of different frequencies: high-frequency waves appear as small waves (e.g., ripples) with small amplitude and frequently changing direction, while low-frequency waves appear as large waves (e.g., swells) with large amplitude and essentially constant direction. Based on these characteristics of the water material, a corresponding sampling mode can be set so that the sampled data restores both the large and the small waves. For example, the sampling mode may be set to sample N times, with M passes at frequencies below a preset frequency (to restore large waves) and N − M passes at frequencies above it (to restore small waves), where M and N are positive integers and M < N.
In one possible embodiment, step 12 may include the following steps:
and determining a sampling mode corresponding to the target rendering material as a target sampling mode according to the predefined corresponding relation between the rendering material and the sampling mode.
Wherein the sampling parameters may comprise at least a sampling frequency, as described above.
As described above, the sampling mode of the rendering material can be set according to the characteristics of the rendering material itself. Based on this, a corresponding relationship between the rendering material and the sampling manner may be further formed, for example, if the rendering material has water, smoke, and cloud, the three may respectively correspond to one sampling manner. Therefore, based on the target rendering material, the sampling mode corresponding to the target rendering material can be obtained from the corresponding relation and used as the target sampling mode.
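The predefined correspondence between rendering materials and sampling modes described above can be as simple as a lookup table. The materials, pass counts, and parameter names below are hypothetical examples chosen for illustration, not values from the disclosure:

```python
# Hypothetical correspondence table: material -> sampling mode
# (number of passes plus per-pass sampling parameters).
SAMPLING_MODES = {
    "water": {"times": 5,
              "params": [{"freq": 2 ** i, "offset": 0.10 * i} for i in range(5)]},
    "smoke": {"times": 4,
              "params": [{"freq": 1.5 ** i, "offset": 0.05 * i} for i in range(4)]},
    "cloud": {"times": 6,
              "params": [{"freq": 2 ** i, "offset": 0.0} for i in range(6)]},
}

def target_sampling_mode(material):
    """Look up the sampling mode corresponding to the target rendering material."""
    return SAMPLING_MODES[material]
```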
In step 13, the target noise image is sampled for a plurality of times according to the sampling times and the sampling parameters of each sampling, so as to obtain noise samples.
After the target sampling mode is determined, the target noise image can be sampled for multiple times according to the sampling times indicated by the target sampling mode and the sampling parameters of each sampling, so that noise samples obtained by each sampling are obtained.
In step 14, image rendering is performed according to the noise samples, so as to obtain a rendered target image.
In one possible embodiment, step 14 may include the steps of:
determining a rendering model for image rendering, the rendering model being composed of a plurality of model meshes;
determining a noise superposition mode corresponding to a target rendering material as a target superposition mode according to a predefined corresponding relation between the rendering material and the noise superposition mode;
sequentially overlapping the noise samples to the grid vertexes of the rendering model according to a target overlapping mode to obtain rendering parameters of the rendering model;
and rendering the image according to the rendering parameters to obtain a target image.
The target superposition mode may be used to indicate a superposition order and/or a superposition direction for the noise samples. For example, the superimposition order may be determined according to a sampling frequency corresponding to the noise samples (e.g., setting the superimposition order to be an order from a low frequency to a high frequency). As another example, the stacking direction may be a fixed direction, or vary with frequency (e.g., rotate with increasing frequency).
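The ordering and direction rules given as examples above (superimpose from low to high frequency; rotate the direction as the frequency rises) might be sketched as follows. The per-octave rotation angle is an assumed choice:

```python
import math

def order_and_directions(samples):
    """Sketch of a target superposition mode: sort noise samples from low
    to high sampling frequency, then assign each a superposition direction
    that rotates as the frequency increases."""
    ordered = sorted(samples, key=lambda s: s["freq"])   # low -> high frequency
    for i, s in enumerate(ordered):
        angle = i * math.pi / 4                          # rotate 45 deg per octave
        s["direction"] = (math.cos(angle), math.sin(angle))
    return ordered
```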
Image rendering is usually performed on a rendering model formed of a plurality of model meshes; a model mesh may be a triangular patch, a quadrilateral patch, or the like. In the present disclosure, triangular model meshes are generally used to improve rendering accuracy. The rendering model varies with the rendering scene: for a two-dimensional image the rendering model is two-dimensional, and for a three-dimensional image it is three-dimensional.
In one possible implementation, the target superposition mode may include a first noise superposition mode for vertices of a model mesh in the rendering model and a second noise superposition mode for pixels in the rendering model. Correspondingly, according to the target superposition mode, sequentially superposing the noise samples on the mesh vertices of the rendering model to obtain rendering parameters of the rendering model, which may include the following steps:
according to a first noise superposition mode, superposing noise samples indicated by the first noise superposition mode to model mesh vertexes in a rendering model to obtain a first superposition result;
according to a second noise superposition mode, superposing the noise samples indicated by the second noise superposition mode to the pixel points in the first superposition result to obtain a second superposition result;
performing illumination calculation on the pixel points in the second superposition result to obtain pixel values of the pixel points in the rendering model;
and taking the second position and the pixel value of the pixel point in the rendering model as rendering parameters.
The first noise superposition mode applies only to the vertices of the model mesh in the rendering model and ignores non-vertex regions, so the resulting first superposition result is relatively coarse. After processing in the first mode, processing continues in the second noise superposition mode, which applies to every pixel point in the rendering model, covering both vertex and non-vertex regions. The second superposition result is therefore more detailed, which benefits rendering accuracy and realism.
The first superposition result may be obtained in a vertex shader, while the second superposition result and the result of the illumination calculation on it may be obtained in a fragment shader (also called a pixel shader).
And the vertex shader acquires the position coordinates of the model mesh vertex from the rendering model, and superimposes the corresponding noise samples to the model mesh vertex in a first noise superimposition mode to obtain a first superimposition result. Wherein the first superposition result is at least used for indicating a first position of each model mesh vertex in the rendering model.
And the fragment shader acquires the first position of each model mesh vertex and the position coordinates of the pixel points near the vertex from the first superposition result, and superposes the corresponding noise samples on the model mesh vertex and the pixel points near the model mesh vertex in a second noise superposition mode to obtain a second superposition result. And the second superposition result is at least used for indicating the second position of the pixel points (the model mesh vertexes and the pixel points nearby) in the rendering model.
For example, if the target rendering material is water, as can be seen from the above, the water has two forms, i.e., a large wave and a small wave, and therefore, the first noise superposition manner may be used to simulate the large wave of the water, and the second noise superposition manner may be used to simulate the large wave and the small wave of the water. That is to say, the large wave is superimposed on the model mesh vertex based on the rendering model, and then the large wave and the small wave are further superimposed on the model mesh vertex of the rendering model and the adjacent pixel points thereof, which is beneficial to improving the reality and naturalness of the rendering result.
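The two-stage superposition for a water material (large waves displacing the mesh vertices, then large plus small waves displacing every pixel) can be outlined as follows. Here `vertex_noise` and `pixel_noise` are hypothetical low- and high-frequency noise lookups, and points are plain (x, y, z) tuples rather than shader data:

```python
def render_params(vertices, pixels, vertex_noise, pixel_noise):
    """Two-stage superposition sketch: the vertex stage displaces mesh
    vertices with low-frequency noise (large waves); the pixel stage then
    displaces every pixel with large plus small waves."""
    # vertex-shader stage: first superposition result (first positions)
    first = [(x, y, z + vertex_noise(x, y)) for x, y, z in vertices]
    # fragment-shader stage: second superposition result (second positions)
    second = [(x, y, z + vertex_noise(x, y) + pixel_noise(x, y))
              for x, y, z in pixels]
    return first, second
```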
Then, the fragment shader may perform illumination calculation based on the second superposition result to obtain the pixel values of the pixel points in the rendering model. For example, the illumination calculation may use a PBR (Physically Based Rendering) model, a Blinn-Phong illumination model, or the like.
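As a sketch of this per-pixel illumination step, here is a minimal evaluation of the Blinn-Phong model, one of the two illumination models the text names. The coefficients and shininess are illustrative defaults, and all input vectors are assumed to be normalized:

```python
import numpy as np

def blinn_phong(normal, light_dir, view_dir, base_color,
                ambient=0.1, diffuse=0.7, specular=0.2, shininess=32):
    """Minimal Blinn-Phong shading for one pixel; all vectors normalized."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    v = np.asarray(view_dir, dtype=float)
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    diff = max(float(n @ l), 0.0)                # Lambertian term
    spec = max(float(n @ h), 0.0) ** shininess   # specular highlight
    return np.asarray(base_color) * (ambient + diffuse * diff) + specular * spec
```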
Based on the steps, second positions of a plurality of pixel points in the rendering model and pixel values corresponding to the pixel points are obtained, so that the data can be used as rendering parameters required by image rendering, a rendering engine can conveniently render images based on the rendering parameters, and a rendered target image is obtained.
With the above technical solution, a target noise image is determined; a target sampling mode for the target noise image is determined according to the target rendering material to be rendered, the target sampling mode comprising the number of sampling passes and the sampling parameters of each pass; the target noise image is sampled multiple times accordingly to obtain noise samples; and image rendering is performed according to the noise samples to obtain a rendered target image. Because a generated target noise image serves as the rendering texture, texture randomness is improved, the rendered image no longer shows the monotonous repetition or translation of a texture map, and rendering realism is enhanced. Sampling from the target noise image also effectively avoids the image-magnification distortion caused by the limited resolution of a fixed texture map. Meanwhile, because the noise is sampled according to the target rendering material, deformation matching that material can be synchronized into the rendering model during rendering, deforming the model itself, which helps improve the realism and stereoscopic quality of the rendering result.
Fig. 2 is a block diagram of an image rendering apparatus provided according to an embodiment of the present disclosure. As shown in fig. 2, the apparatus 20 includes:
a first determination module 21 for determining a target noise image;
a second determining module 22, configured to determine, according to a target rendering material to be rendered, a target sampling mode for the target noise image, where the target sampling mode includes the number of sampling times and the sampling parameters of each sampling;
the sampling module 23, configured to sample the target noise image multiple times according to the number of sampling times and the sampling parameters of each sampling to obtain noise samples;
and the rendering module 24 is configured to perform image rendering according to the noise samples to obtain a rendered target image.
Optionally, the first determining module 21 includes:
a generation sub-module for generating an initial noise image in response to identifying the image rendering request;
and the processing submodule is used for carrying out continuous processing on the initial noise image to obtain the target noise image.
Optionally, the generating sub-module is configured to generate a noise grid, and generate a random number at each node in the noise grid to obtain the initial noise image;
and the processing submodule is used for carrying out interpolation processing on a non-node part in the noise grid based on the random number corresponding to the node in the initial noise image to obtain the target noise image.
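As an illustrative sketch of these two submodules (value noise with assumed wrap-around boundaries; none of the function names come from the patent), a random number is generated at each grid node, and non-node positions are filled in by smooth interpolation between the surrounding nodes:

```python
import numpy as np

def make_noise_grid(nodes, seed=0):
    """Generate a random number at each node of a nodes x nodes grid:
    the 'initial noise image' of the patent."""
    rng = np.random.default_rng(seed)
    return rng.random((nodes, nodes))

def smoothstep(t):
    # Hermite easing keeps the interpolated noise smooth across nodes.
    return t * t * (3.0 - 2.0 * t)

def value_noise(grid, x, y):
    """Interpolate between grid nodes for non-node positions: the
    'continuous processing' step. x and y are in grid units; the grid
    wraps around at its edges (an illustrative boundary choice)."""
    n = grid.shape[0]
    x0, y0 = int(np.floor(x)) % n, int(np.floor(y)) % n
    x1, y1 = (x0 + 1) % n, (y0 + 1) % n
    tx = smoothstep(x - np.floor(x))
    ty = smoothstep(y - np.floor(y))
    a = grid[y0, x0] * (1 - tx) + grid[y0, x1] * tx
    b = grid[y1, x0] * (1 - tx) + grid[y1, x1] * tx
    return a * (1 - ty) + b * ty
```

At integer coordinates the function reproduces the node's random value exactly; between nodes it varies continuously, which is what makes the resulting texture usable at arbitrary resolution.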
Optionally, the second determining module 22 is configured to determine, according to a predefined correspondence between rendering materials and sampling modes, the sampling mode corresponding to the target rendering material as the target sampling mode, where the sampling parameters at least include a sampling frequency.
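A predefined correspondence of this kind can be as simple as a lookup table; the materials, counts, and frequency/amplitude values below are made-up placeholders, not values taken from the patent:

```python
# Hypothetical material -> sampling-mode table; each entry fixes the
# number of sampling times and the parameters of each sampling
# (frequency doubles and amplitude decays per sampling, as in
# typical fractal-noise setups).
SAMPLING_MODES = {
    "water": {"times": 4,
              "params": [{"frequency": 2.0 ** i, "amplitude": 0.5 ** i}
                         for i in range(4)]},
    "cloud": {"times": 6,
              "params": [{"frequency": 2.0 ** i, "amplitude": 0.6 ** i}
                         for i in range(6)]},
}

def target_sampling_mode(material):
    """Look up the sampling mode corresponding to the target rendering
    material; unknown materials are reported explicitly."""
    try:
        return SAMPLING_MODES[material]
    except KeyError:
        raise ValueError(f"no sampling mode defined for material {material!r}")
```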
Optionally, the rendering module 24 includes:
a first determining submodule for determining a rendering model for image rendering, the rendering model being composed of a plurality of model meshes;
the second determining submodule is configured to determine, according to a predefined correspondence between rendering materials and noise superposition modes, the noise superposition mode corresponding to the target rendering material as a target superposition mode, where the target superposition mode is used to indicate the superposition order and/or superposition direction for the noise samples;
the first superposition submodule is configured to sequentially superpose the noise samples onto the mesh vertices of the rendering model according to the target superposition mode, so as to obtain rendering parameters of the rendering model;
and the rendering submodule is used for rendering the image according to the rendering parameters to obtain the target image.
Optionally, the target superposition mode includes a first noise superposition mode for model mesh vertices in the rendering model and a second noise superposition mode for pixel points in the rendering model;
the first stacking submodule includes:
the second superposition submodule is configured to superpose, according to the first noise superposition mode, the noise samples indicated by the first noise superposition mode onto the model mesh vertices in the rendering model to obtain a first superposition result, where the first superposition result is at least used to indicate the first position of each model mesh vertex in the rendering model;
a third superposition submodule, configured to superpose the noise sample indicated by the second noise superposition mode to the pixel point in the first superposition result according to the second noise superposition mode, so as to obtain a second superposition result, where the second superposition result is at least used to indicate a second position of the pixel point in the rendering model;
the calculation submodule is used for carrying out illumination calculation on the pixel points in the second superposition result to obtain pixel values of the pixel points in the rendering model;
and the third determining submodule is used for taking the second position and the pixel value of the pixel point in the rendering model as the rendering parameter.
Optionally, the target rendering material is water, the first noise superposition mode is used for simulating large waves of water, and the second noise superposition mode is used for simulating both large and small waves of water.
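For the water example, the split between the two superposition modes can be sketched as follows; the sinusoids stand in for noise samples, and every frequency, amplitude, and speed value is invented for illustration:

```python
import numpy as np

def wave_octave(x, t, freq, amp, speed):
    """One stand-in 'noise sample' for the water surface at time t."""
    return amp * np.sin(freq * x + speed * t)

def water_surface(x, t):
    """Vertex stage: a low-frequency octave displaces the mesh
    vertices (big waves). Fragment stage: a high-frequency,
    low-amplitude octave is further superposed per pixel, so the
    fragment result carries big and small waves together."""
    vertex = wave_octave(x, t, freq=0.5, amp=1.0, speed=1.0)
    fragment = vertex + wave_octave(x, t, freq=4.0, amp=0.1, speed=3.0)
    return vertex, fragment
```

The fragment result never deviates from the vertex displacement by more than the small-wave amplitude, which keeps the per-pixel detail consistent with the coarse vertex deformation.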
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
Fig. 3 is a block diagram illustrating an electronic device 700 according to an example embodiment. As shown in fig. 3, the electronic device 700 may include: a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 so as to complete all or part of the steps of the image rendering method described above. The memory 702 is used to store various types of data to support operation on the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data, for example contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, 5G, NB-IoT, eMTC, or a combination of one or more of them, which is not limited herein. The corresponding communication component 705 may thus include a Wi-Fi module, a Bluetooth module, an NFC module, and the like.
In an exemplary embodiment, the electronic Device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the image rendering method described above.
In another exemplary embodiment, there is also provided a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the image rendering method described above. For example, the computer readable storage medium may be the memory 702 described above including program instructions executable by the processor 701 of the electronic device 700 to perform the image rendering method described above.
The preferred embodiments of the present disclosure are described in detail above with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details in the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the foregoing embodiments may be combined in any suitable manner without contradiction. To avoid unnecessary repetition, the disclosure does not separately describe various possible combinations.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (10)

1. A method of image rendering, the method comprising:
determining a target noise image;
determining a target sampling mode for the target noise image according to a target rendering material to be rendered, wherein the target sampling mode comprises the number of sampling times and the sampling parameters of each sampling;
sampling the target noise image multiple times according to the number of sampling times and the sampling parameters of each sampling to obtain noise samples;
and performing image rendering according to the noise samples to obtain a rendered target image.
2. The method of claim 1, wherein determining a target noise image comprises:
in response to identifying the image rendering request, generating an initial noise image;
and carrying out continuous processing on the initial noise image to obtain the target noise image.
3. The method of claim 2, wherein generating an initial noise image comprises:
generating a noise grid, and generating a random number at each node in the noise grid to obtain the initial noise image;
the performing continuous processing on the initial noise image to obtain the target noise image includes:
and carrying out interpolation processing on the non-node part in the noise grid based on the random number corresponding to the node in the initial noise image to obtain the target noise image.
4. The method of claim 1, wherein the determining a target sampling mode for the target noise image according to a target rendering material to be rendered comprises:
and determining, according to a predefined correspondence between rendering materials and sampling modes, the sampling mode corresponding to the target rendering material as the target sampling mode, wherein the sampling parameters at least comprise a sampling frequency.
5. The method of claim 1, wherein the rendering the image according to the noise sample to obtain a rendered target image comprises:
determining a rendering model for image rendering, the rendering model being composed of a plurality of model meshes;
determining, according to a predefined correspondence between rendering materials and noise superposition modes, the noise superposition mode corresponding to the target rendering material as a target superposition mode, wherein the target superposition mode is used for indicating a superposition order and/or a superposition direction for the noise samples;
sequentially superposing the noise samples onto the mesh vertices of the rendering model according to the target superposition mode to obtain rendering parameters of the rendering model;
and rendering the image according to the rendering parameters to obtain the target image.
6. The method of claim 5, wherein the target superposition mode comprises a first noise superposition mode for model mesh vertices in the rendering model and a second noise superposition mode for pixel points in the rendering model;
the sequentially superposing the noise samples onto the mesh vertices of the rendering model according to the target superposition mode to obtain the rendering parameters of the rendering model comprises:
superposing, according to the first noise superposition mode, the noise samples indicated by the first noise superposition mode onto the model mesh vertices in the rendering model to obtain a first superposition result, wherein the first superposition result is at least used for indicating a first position of each model mesh vertex in the rendering model;
superposing, according to the second noise superposition mode, the noise samples indicated by the second noise superposition mode onto the pixel points in the first superposition result to obtain a second superposition result, wherein the second superposition result is at least used for indicating second positions of the pixel points in the rendering model;
performing illumination calculation on the pixel points in the second superposition result to obtain pixel values of the pixel points in the rendering model;
and taking the second position and the pixel value of the pixel point in the rendering model as the rendering parameters.
7. The method of claim 6, wherein the target rendering material is water, the first noise superposition mode is used for simulating large waves of water, and the second noise superposition mode is used for simulating both large and small waves of water.
8. An image rendering apparatus, characterized in that the apparatus comprises:
a first determination module for determining a target noise image;
the second determination module is used for determining a target sampling mode for the target noise image according to a target rendering material to be rendered, wherein the target sampling mode comprises the number of sampling times and the sampling parameters of each sampling;
the sampling module is used for sampling the target noise image multiple times according to the number of sampling times and the sampling parameters of each sampling to obtain noise samples;
and the rendering module is used for rendering the image according to the noise samples to obtain a rendered target image.
9. A non-transitory computer readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
CN202210652062.5A 2022-06-09 2022-06-09 Image rendering method and device, storage medium and electronic equipment Pending CN115018968A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210652062.5A CN115018968A (en) 2022-06-09 2022-06-09 Image rendering method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210652062.5A CN115018968A (en) 2022-06-09 2022-06-09 Image rendering method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115018968A true CN115018968A (en) 2022-09-06

Family

ID=83072528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210652062.5A Pending CN115018968A (en) 2022-06-09 2022-06-09 Image rendering method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115018968A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745915A (en) * 2024-02-07 2024-03-22 西交利物浦大学 Model rendering method, device, equipment and storage medium
CN117745915B (en) * 2024-02-07 2024-05-17 西交利物浦大学 Model rendering method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN107680042B (en) Rendering method, device, engine and storage medium combining texture and convolution network
CN111968216A (en) Volume cloud shadow rendering method and device, electronic equipment and storage medium
EP3533218B1 (en) Simulating depth of field
US8854392B2 (en) Circular scratch shader
CN112652046B (en) Game picture generation method, device, equipment and storage medium
CN112581632B (en) House source data processing method and device
CN114049420A (en) Model training method, image rendering method, device and electronic equipment
CN115170740A (en) Special effect processing method and device, electronic equipment and storage medium
Liu et al. Real-Time Neural Rasterization for Large Scenes
CN116385622B (en) Cloud image processing method, cloud image processing device, computer and readable storage medium
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
Ruzínoor et al. 3D terrain visualisation for GIS: A comparison of different techniques
CN111508058A (en) Method and device for three-dimensional reconstruction of image, storage medium and electronic equipment
CN115487495A (en) Data rendering method and device
CN115018968A (en) Image rendering method and device, storage medium and electronic equipment
CN114742931A (en) Method and device for rendering image, electronic equipment and storage medium
CN115049559A (en) Model training method, human face image processing method, human face model processing device, electronic equipment and readable storage medium
CN116843812A (en) Image rendering method and device and electronic equipment
CN109729285B (en) Fuse grid special effect generation method and device, electronic equipment and storage medium
CN111354082B (en) Method and device for generating surface map, electronic equipment and storage medium
EP4111422A1 (en) Visualisation of surface features of a virtual fluid
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
KR20230013099A (en) Geometry-aware augmented reality effects using real-time depth maps
Suppan et al. Neural Screen Space Rendering of Direct Illumination.
CN110148086A (en) The depth polishing method, apparatus and three-dimensional rebuilding method of sparse depth figure, device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination