CN116543094A - Model rendering method, device, computer readable storage medium and electronic equipment - Google Patents

Model rendering method, device, computer readable storage medium and electronic equipment

Info

Publication number
CN116543094A
Authority
CN
China
Prior art keywords
model
illumination
target virtual
virtual model
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310507003.3A
Other languages
Chinese (zh)
Inventor
Zhang Xudong (张旭东)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310507003.3A priority Critical patent/CN116543094A/en
Publication of CN116543094A publication Critical patent/CN116543094A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The disclosure provides a model rendering method, a model rendering device, a computer readable storage medium and electronic equipment, and relates to the field of computer technology. The method comprises the following steps: determining a model surrounding area matched with a target virtual model based on a model bounding box of the target virtual model, wherein the model surrounding area is a region wrapping the target virtual model; determining illumination attenuation data corresponding to a plurality of orientations on the surface of the model surrounding area based on illumination vector data of a preset light source in the current illumination direction; and performing illumination rendering on the target virtual model based on the illumination attenuation data. By determining the model surrounding area and limiting illumination rendering to it, the method reduces unnecessary calculation overhead to a certain extent, improves the processing efficiency of model rendering, places low performance requirements on hardware, and adapts well to multi-light-source application scenes.

Description

Model rendering method, device, computer readable storage medium and electronic equipment
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a model rendering method, a model rendering device, a computer readable storage medium and electronic equipment.
Background
To enhance the realism and ambience of a virtual scene, one or more virtual light sources are typically configured in the virtual scene, and the virtual scene is illumination rendered based on the configured virtual light sources.
In the related art, illumination rendering is generally implemented based on a basic illumination model, such as the Phong illumination model (Phong Lighting Model) or the Blinn-Phong illumination model (Blinn-Phong Lighting Model). When illumination rendering is performed, the illumination intensity and color of each pixel need to be calculated quickly on a GPU (Graphics Processing Unit) through a shader program, so the illumination rendering processing efficiency is low and the performance requirements on hardware are high.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure provides a model rendering method, a model rendering device, a computer readable storage medium and electronic equipment, so as to overcome, at least to a certain extent, the problems of low illumination rendering processing efficiency and high hardware performance requirements in the related art.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a model rendering method, the method comprising: determining a model surrounding area matched with a target virtual model based on a model surrounding box of the target virtual model, wherein the model surrounding area is an area for wrapping the target virtual model; determining illumination attenuation data corresponding to a plurality of directions on the surface of the model surrounding area based on illumination vector data of a preset light source in the current illumination direction; and performing illumination rendering on the target virtual model based on the illumination attenuation data.
According to a second aspect of the present disclosure, there is provided a model rendering apparatus, the apparatus comprising: the surrounding area determining module is used for determining a model surrounding area matched with the target virtual model based on a model surrounding box of the target virtual model, wherein the model surrounding area is an area for wrapping the target virtual model; the attenuation determining module is used for determining illumination attenuation data corresponding to a plurality of directions on the surface of the model surrounding area based on illumination vector data of a preset light source in the current illumination direction; and the illumination rendering module is used for performing illumination rendering on the target virtual model based on the illumination attenuation data.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any one of the model rendering methods described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any of the model rendering methods described above via execution of the executable instructions.
The technical scheme of the present disclosure has the following beneficial effects:
in the model rendering process, a model surrounding area matched with the target virtual model is determined based on a model bounding box of the target virtual model, the model surrounding area being a region wrapping the target virtual model; illumination attenuation data corresponding to a plurality of orientations on the surface of the model surrounding area are determined based on illumination vector data of a preset light source in the current illumination direction; and illumination rendering is performed on the target virtual model based on the illumination attenuation data. By determining the model surrounding area of the target virtual model and limiting illumination rendering to that area, unnecessary calculation overhead and computational complexity are reduced, the processing efficiency of model rendering is improved to a certain extent, the smoothness of a game is ensured, the hardware performance requirements are lowered, and the method is suitable for mobile terminals and multi-light-source application scenes.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely some embodiments of the present disclosure and that other drawings may be derived from these drawings without undue effort.
Fig. 1 shows a flowchart of a model rendering method in the present exemplary embodiment;
fig. 2A shows an example diagram of a model bounding box corresponding to a target virtual model in the present exemplary embodiment;
fig. 2B is a diagram showing an example of a spherical surrounding area corresponding to a target virtual model in the present exemplary embodiment;
FIG. 2C illustrates a diagram of an example rendering of a single light source model in the present exemplary embodiment;
FIG. 2D illustrates an example diagram of a multi-illuminant model rendering in the present exemplary embodiment;
FIG. 3 illustrates a flowchart for implementing the real-time update of lighting effects in the present exemplary embodiment;
FIG. 4 illustrates a flowchart of adjusting the lighting details of a target virtual model in the present exemplary embodiment;
fig. 5 is a diagram showing example detail effects of the individual color channels in the present exemplary embodiment;
FIG. 6 illustrates a flowchart of illumination rendering of a model using a model enclosure in the present exemplary embodiment;
fig. 7 shows a block diagram of a model rendering apparatus in the present exemplary embodiment;
fig. 8 shows an electronic device for implementing the above model rendering method in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Herein, "first," "second," and the like are labels for specific objects, and do not limit the number or order of objects.
In the related art, when performing illumination rendering, the illumination intensity and color of each pixel need to be rapidly calculated on the GPU by a shader program using a basic illumination model, so as to achieve a gradual illumination effect. This process requires a large amount of calculation and rendering, the illumination rendering processing efficiency is low, the performance requirements on hardware are high, and the approach does not adapt well to mobile terminals or multi-light-source application scenes.
In view of one or more of the above problems, exemplary embodiments of the present disclosure provide a model rendering method, a model rendering device, a computer-readable storage medium, and an electronic device, which may be deployed in an intelligent processing device such as a mobile phone, a tablet, a computer, or a server, and may perform illumination rendering on a plurality of light sources at the same time.
As shown in fig. 1, a flow chart of a model rendering method is provided, which specifically includes the following steps S110 to S130:
step S110, determining a model surrounding area matched with the target virtual model based on a model surrounding box of the target virtual model, wherein the model surrounding area is an area for wrapping the target virtual model;
step S120, determining illumination attenuation data corresponding to a plurality of directions on the surface of a model surrounding area based on illumination vector data of a preset light source in the current illumination direction;
and step S130, performing illumination rendering on the target virtual model based on the illumination attenuation data.
In the model rendering process, by determining the model surrounding area of the target virtual model and limiting illumination rendering to that area, unnecessary calculation overhead and computational complexity are reduced, the processing efficiency of model rendering is improved to a certain extent, the smoothness of a game is ensured, the hardware performance requirements are lowered, and the method is suitable for mobile terminals and multi-light-source application scenes.
Each step in fig. 1 is specifically described below.
In step S110, a model bounding region matching the target virtual model is determined based on the model bounding box of the target virtual model, the model bounding region being a region wrapping the target virtual model.
Wherein the target virtual model refers to a virtual model to be rendered by illumination, and the target virtual model can be a three-dimensional virtual model. Taking the virtual scene as an example, the target virtual model may be any type of virtual model located in the virtual scene, such as a virtual model of a person, animal, building, etc.
In an alternative embodiment, before determining the model bounding region matching the target virtual model based on the model bounding box of the target virtual model, the target virtual model may be determined by: obtaining a virtual model to be displayed in a scene display interface; determining the scene type of the virtual model to be displayed according to the relative position information of the virtual model to be displayed in the scene display interface; judging whether the virtual model to be displayed meets preset conditions or not based on the scene type of the virtual model to be displayed; and if the preset condition is met, taking the virtual model to be displayed as a target virtual model.
The virtual model to be displayed may be a three-dimensional virtual model located in a virtual scene. By way of example, scene types may include three types: near view, middle view, and far view. Specifically, if the virtual model to be displayed is located toward the front of the scene display interface, the scene type to which it belongs is determined to be the near view; if it is located toward the back of the scene display interface, its scene type is determined to be the far view; and if it is located in a middle position relative to the scene display interface, it can be determined to belong to the middle view. The preset condition may include either of the following: the scene type of the virtual model to be displayed is the near view; or the scene type of the virtual model to be displayed is the near view or the middle view. It should be noted that this division of scene types is only illustrative, and in practical applications other divisions may be used according to actual requirements, for example dividing scene types into two types, near view and far view.
In the above steps, the target virtual model is determined based on scene type, which reduces the number of model objects to be rendered to a certain extent; since illumination rendering need not be performed on every model object in the scene, the performance requirements on hardware can be further reduced.
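For illustration only, the scene-type filtering described above might be sketched as follows in C++; the type names, the depth-based position test and the thresholds are assumptions, since the disclosure does not fix them:

    #include <vector>

    // Scene types as described above: near view, middle view, far view.
    enum class SceneType { NearView, MiddleView, FarView };

    struct VirtualModel {
        float viewDepth; // assumed: relative position of the model in the scene display interface
    };

    // Classify a model by its relative position (assumed here to be a view depth).
    SceneType ClassifyModel(const VirtualModel& m, float nearLimit, float farLimit) {
        if (m.viewDepth < nearLimit) return SceneType::NearView;
        if (m.viewDepth > farLimit)  return SceneType::FarView;
        return SceneType::MiddleView;
    }

    // Keep only models satisfying the preset condition (here: near or middle view)
    // as target virtual models for illumination rendering.
    std::vector<const VirtualModel*> SelectTargets(const std::vector<VirtualModel>& models,
                                                   float nearLimit, float farLimit) {
        std::vector<const VirtualModel*> targets;
        for (const auto& m : models) {
            if (ClassifyModel(m, nearLimit, farLimit) != SceneType::FarView)
                targets.push_back(&m);
        }
        return targets;
    }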
Wherein, the model bounding box refers to the smallest cuboid bounding volume capable of containing the target virtual model, and can be represented by a center point and three axial lengths. Specifically, the center point of the target virtual model may be used as the center point of the model bounding box, and the length, width and height of the target virtual model may be used as the three axial lengths of the model bounding box, so as to obtain a model bounding box matched with the target virtual model. Illustratively, as shown in FIG. 2A, an example diagram of a model bounding box 202 corresponding to a target virtual model 201 is provided.
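As a minimal illustrative sketch (type and function names are assumptions), such a box, stored as a center point plus three axial half-lengths, can be built as follows; here it is computed from vertex positions, which yields the same center-plus-extents representation described above:

    #include <algorithm>

    struct Vec3 { float x, y, z; };

    // A bounding box stored as a center point plus three axial half-lengths.
    struct ModelBoundingBox {
        Vec3 center;
        Vec3 halfExtents; // half of length/width/height along x, y, z
    };

    // Build the bounding box from the model's vertex positions (assumed input).
    ModelBoundingBox BuildBoundingBox(const Vec3* verts, int count) {
        Vec3 lo = verts[0], hi = verts[0];
        for (int i = 1; i < count; ++i) {
            lo.x = std::min(lo.x, verts[i].x); hi.x = std::max(hi.x, verts[i].x);
            lo.y = std::min(lo.y, verts[i].y); hi.y = std::max(hi.y, verts[i].y);
            lo.z = std::min(lo.z, verts[i].z); hi.z = std::max(hi.z, verts[i].z);
        }
        return { { (lo.x + hi.x) * 0.5f, (lo.y + hi.y) * 0.5f, (lo.z + hi.z) * 0.5f },
                 { (hi.x - lo.x) * 0.5f, (hi.y - lo.y) * 0.5f, (hi.z - lo.z) * 0.5f } };
    }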
Wherein, the model surrounding area refers to a region that can wrap the target virtual model. Alternatively, the model surrounding area may be a closed region with a smooth surface and no corners, contained in the model bounding box. It should be noted that the size of the model surrounding area may be determined by the length, width and height of the corresponding target virtual model: the larger the target virtual model, the larger its model surrounding area; the smaller the target virtual model, the smaller its model surrounding area. Because the surface of the model surrounding area is smoother than the surface of the target virtual model and therefore easier to process, performing illumination rendering with the model surrounding area assisting the target virtual model can improve the processing efficiency of model rendering to a certain extent.
The model bounding region is determined with the model bounding box as a reference. Compared with the target virtual model, the model bounding box has a simple structure, which can simplify the process of determining the model bounding region to a certain extent and thus improve processing efficiency.
In an alternative embodiment, determining the model bounding region matched with the target virtual model in step S110 based on the model bounding box of the target virtual model may specifically be implemented by the following step: determining the model bounding region of the target virtual model based on the axis vectors of the model bounding box in three-dimensional space.
Wherein, an axis vector refers to a vector parallel to a coordinate axis.
Alternatively, if a spatial coordinate system is constructed with the center point of the model bounding box as the origin, an axis vector of the model bounding box in three-dimensional space may be a vector parallel to a coordinate axis, starting at the origin and ending at the boundary of the model bounding box. Optionally, when determining the model bounding region of the target virtual model based on the axis vectors of the model bounding box in three-dimensional space, the axis vectors may be used as a key vector set limiting the range of the model bounding region, and the model bounding region is then obtained by fitting.
Alternatively, if the world space coordinate system is taken as the reference coordinate system, an axis vector of the model bounding box in three-dimensional space may be a vector parallel to a coordinate axis, starting at the origin and ending on the plane where a surface of the model bounding box lies. Optionally, when determining the model bounding region of the target virtual model based on the axis vectors of the model bounding box in three-dimensional space, three auxiliary vectors parallel to the coordinate axes may first be determined based on the three-dimensional coordinate values of the center point of the model bounding box, the three auxiliary vectors being a·i+0·j+0·k, 0·i+b·j+0·k and 0·i+0·j+c·k, where i, j, k respectively denote the three basis vectors parallel to the coordinate axes; each axis vector of the model bounding box in three-dimensional space can then be reduced by the auxiliary vector in the same direction, yielding a key vector set that defines the range of the model bounding region, so that the model bounding region can be fitted based on this key vector set.
Alternatively, the axis vectors of the model bounding box in world space can also be obtained with the following code:
GetPrimitiveData(Parameters.PrimitiveId).LocalObjectBoundsMax.xyz。
It should be noted that the above code is based on the C++ programming language; other programming languages may be adopted in practical applications, which is not limited herein.
Alternatively, the model surrounding area may be a spherical surrounding area centered on the center of the model bounding box. Because the surface of a spherical surrounding area is relatively smooth, using it to assist illumination rendering makes the illumination transition smoother and more natural and avoids unnecessary calculation overhead. In addition, the model surrounding area may also be an ellipsoidal surrounding area, a cuboid surrounding area, a regular-polyhedron surrounding area, or a region of another shape. As shown in FIG. 2B, an example diagram of an ellipsoidal surrounding area 203 corresponding to the target virtual model 201 is provided.
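A minimal sketch of deriving a spherical surrounding area from the model bounding box, reusing the Vec3 and ModelBoundingBox types from the sketch above; taking the corner distance as the radius is an assumption that guarantees the sphere wraps the model:

    #include <cmath>

    struct BoundingSphere { Vec3 center; float radius; };

    // The sphere shares the box center; the radius reaches the farthest box
    // corner so the whole target virtual model is contained.
    BoundingSphere SphereFromBox(const ModelBoundingBox& box) {
        const Vec3& h = box.halfExtents;
        float radius = std::sqrt(h.x * h.x + h.y * h.y + h.z * h.z);
        return { box.center, radius };
    }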
After determining the model surrounding area that matches the target virtual model, step S120 may continue to be performed.
In step S120, illumination attenuation data corresponding to a plurality of orientations on the surface of the model enclosure is determined based on illumination vector data of the preset light source in the current illumination direction.
The space vector data corresponding to the plurality of orientations on the surface of the model surrounding area may be the vector data of the space vectors formed from the sphere center of the model surrounding area to the points on its surface. The preset light source may be a simulated light source preconfigured in the virtual scene, such as a virtual stage light or a virtual street light. The illumination vector data may be the illumination vector parameters of the preset light source in the current illumination direction, and the direction from the position of the preset light source toward the current position of the target virtual model may be used as the current illumination direction.
Specifically, illumination attenuation mapping can be performed according to space vector data corresponding to a plurality of directions on the surface of the model surrounding area and illumination vector data of a preset light source in the current illumination direction, so as to obtain illumination attenuation data corresponding to the plurality of directions on the surface of the model surrounding area.
In an alternative embodiment, the determining, in step S120, the illumination attenuation data corresponding to a plurality of orientations on the surface of the model surrounding area based on the illumination vector data of the preset light source in the current illumination direction may be implemented by the following steps: carrying out dot product operation on space vector data corresponding to a plurality of directions on the surface of the model enclosing region and illumination vector data of a preset light source in the current illumination direction to obtain dot product operation results corresponding to the plurality of directions on the surface of the model enclosing region; and carrying out illumination attenuation mapping on dot product operation results corresponding to a plurality of orientations on the surface of the model enclosing region to obtain illumination attenuation data corresponding to the plurality of orientations on the surface of the model enclosing region.
Specifically, dot product operation is performed on the space vector data corresponding to a plurality of orientations on the surface of the model surrounding area and the illumination vector data of the preset light source in the current illumination direction; after the dot product results are obtained, illumination attenuation mapping can be performed on them based on a preset mapping relation. Alternatively, the mapping relation may be one in which the dot product result and the illumination attenuation data are positively correlated, that is, the greater the dot product result, the stronger the illumination attenuation, and the smaller the dot product result, the weaker the illumination attenuation. In practical applications, the mapping relation may be configured according to the desired mapping effect, which is not specifically limited herein.
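The dot-product-plus-mapping step might be sketched as follows; the concrete mapping function is an assumption, since the text only requires a positive correlation between the dot product and the attenuation (Vec3 is reused from the sketches above):

    float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // One possible attenuation mapping (an assumption). d is the dot product of
    // a unit surface direction with the unit illumination vector (light -> model):
    // d near -1 faces the light (weak attenuation), d near +1 is the back side.
    float MapAttenuation(float d) {
        float t = (d + 1.0f) * 0.5f; // remap [-1, 1] -> [0, 1]
        return t * t;                // monotonically increasing: larger d, stronger attenuation
    }

    // Attenuation for each sampled surface direction of the surrounding area.
    void ComputeAttenuation(const Vec3* surfaceDirs, int count,
                            const Vec3& lightDir, float* attenuationOut) {
        for (int i = 0; i < count; ++i)
            attenuationOut[i] = MapAttenuation(Dot(surfaceDirs[i], lightDir));
    }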
In the above steps, through illumination attenuation mapping, different attenuation intensities corresponding to different directions of the model surrounding area can be achieved, illumination attenuation of the light-facing surface is small, illumination attenuation of the backlight surface is large, gradual change of illumination is achieved, and visual sense reality of illumination rendering is further improved.
In an optional embodiment, the determining, in step S120, of illumination attenuation data corresponding to a plurality of orientations on the surface of the model surrounding area based on the illumination vector data of the preset light source in the current illumination direction may further be implemented by the following steps: determining illumination attenuation data corresponding to preset orientations on the surface of the model surrounding area based on the illumination vector data of the preset light source in the current illumination direction; and determining, through interpolation, the illumination attenuation data corresponding to the non-preset orientations between adjacent preset orientations according to the illumination attenuation data corresponding to those adjacent preset orientations.
The preset orientations are different orientations relative to the sphere center of the model surrounding area, and a plurality of preset orientations, such as up, down, front, back, left and right, can be set in advance according to the appearance characteristics of the model. It should be noted that the specific directions and the number of the preset orientations given in the examples are only illustrative and may be set as needed in practical applications, which is not specifically limited herein. Optionally, dot product operation can be performed on the space vector data corresponding to a preset orientation on the surface of the model surrounding area and the illumination vector data of the preset light source in the current illumination direction, so as to obtain the dot product result corresponding to that preset orientation.
Wherein, two preset orientations with no other preset orientation lying within the minimum included angle between them can be regarded as adjacent preset orientations. Taking the preset orientations up, down, front, back, left and right as an example: front, left and up are mutually adjacent preset orientations; front, up and right are mutually adjacent; front, right and down are mutually adjacent; front, down and left are mutually adjacent; back, left and up are mutually adjacent; back, up and right are mutually adjacent; back, right and down are mutually adjacent; and back, down and left are mutually adjacent.
Wherein, a non-preset orientation refers to an orientation on the surface of the model surrounding area other than the preset orientations. Alternatively, the non-preset orientations may be the orientations on the surface of the model surrounding area that lie between adjacent preset orientations.
Specifically, after the illumination attenuation data corresponding to the preset orientations on the surface of the model surrounding area are determined, interpolation can be carried out directly between adjacent preset orientations to obtain the illumination attenuation data corresponding to the non-preset orientations between them, thereby determining the illumination attenuation data corresponding to a plurality of orientations on the surface of the model surrounding area and further improving processing efficiency.
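A sketch of interpolating between the preset orientations; the per-axis weighting scheme here is one plausible choice, not the one mandated by the disclosure (MapAttenuation and Dot are reused from the sketch above):

    #include <cmath>

    // Six preset orientations: up, down, front, back, left, right.
    static const Vec3 kPresetDirs[6] = {
        { 0,  1,  0}, { 0, -1,  0}, { 0,  0,  1},
        { 0,  0, -1}, {-1,  0,  0}, { 1,  0,  0}
    };

    // Fill preset[6] with attenuation for the current illumination direction.
    void ComputePresetAttenuation(const Vec3& lightDir, float preset[6]) {
        for (int i = 0; i < 6; ++i)
            preset[i] = MapAttenuation(Dot(kPresetDirs[i], lightDir));
    }

    // Attenuation for an arbitrary unit direction, blended from the preset
    // orientations using the direction's absolute components as weights.
    float InterpolateAttenuation(const Vec3& dir, const float preset[6]) {
        float wx = std::fabs(dir.x), wy = std::fabs(dir.y), wz = std::fabs(dir.z);
        float sum = wx + wy + wz;
        if (sum <= 0.0f) return 0.0f;
        float ax = dir.x >= 0 ? preset[5] : preset[4]; // right / left
        float ay = dir.y >= 0 ? preset[0] : preset[1]; // up / down
        float az = dir.z >= 0 ? preset[2] : preset[3]; // front / back
        return (wx * ax + wy * ay + wz * az) / sum;
    }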
After determining the illumination decay data, step S130 may continue to be performed.
In step S130, the target virtual model is subjected to illumination rendering based on the illumination attenuation data.
Illumination mapping is performed using the illumination attenuation data, so that the light-facing surface has smaller illumination attenuation and higher brightness, while the backlit surface has larger attenuation and lower brightness. Exemplarily, as shown in FIG. 2C, a single-light-source model rendering example diagram is provided.
Specifically, when performing illumination rendering, the illumination of the target virtual model can be rendered according to the illumination attenuation data and other preset parameters of the preset light source, such as illumination intensity and illumination color.
In the practical application process, the steps in fig. 1 may also be repeated to superimpose the illumination effects of the multiple light sources on the target virtual model. Illustratively, as shown in FIG. 2D, an example diagram of a multiple light source model rendering is provided.
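A sketch of combining the attenuation with a light's preset parameters and superimposing several lights by repeating the same steps; the plain weighted accumulation is an illustrative assumption, not the disclosure's exact shading formula:

    struct LightParams {
        Vec3  direction;  // current illumination direction (unit vector, light -> model)
        Vec3  color;      // illumination color
        float intensity;  // illumination intensity
    };

    // Shade one surface direction of the surrounding area under several lights.
    Vec3 ShadePoint(const Vec3& surfaceDir, const LightParams* lights, int lightCount) {
        Vec3 result = {0, 0, 0};
        for (int i = 0; i < lightCount; ++i) {
            float atten = MapAttenuation(Dot(surfaceDir, lights[i].direction));
            float k = lights[i].intensity * (1.0f - atten); // weak attenuation -> bright
            result.x += lights[i].color.x * k;
            result.y += lights[i].color.y * k;
            result.z += lights[i].color.z * k;
        }
        return result;
    }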
In an alternative embodiment, the steps shown in fig. 3 may also be performed to achieve real-time updating of the lighting effect:
step S310, adjusting illumination parameters of a preset light source;
step S320, updating the illumination attenuation data based on the adjusted illumination parameters of the preset light source;
step S330, updating the illumination effect of the target virtual model based on the updated illumination attenuation data.
The illumination parameters of the preset light source may include an illumination direction parameter. For example, the illumination direction of the preset light source may be adjusted.
After the illumination parameters of the preset light source are adjusted, the illumination attenuation data can be updated based on the adjusted parameters. Taking adjustment of the illumination direction as an example: dot product operation can be performed on the space vector data corresponding to a plurality of orientations on the surface of the model surrounding area and the illumination vector data of the preset light source in the adjusted illumination direction, so as to obtain new dot product results corresponding to those orientations; illumination attenuation mapping is then performed on the new dot product results to obtain new illumination attenuation data corresponding to the plurality of orientations on the surface of the model surrounding area, thereby updating the illumination attenuation data.
Specifically, after the illumination attenuation data is updated, illumination rendering can be performed on the target virtual model based on the updated illumination attenuation data to obtain a new illumination effect of the target virtual model, so that the illumination effect of the target virtual model is updated in real time, and the diversity of the illumination effect of the target virtual model is improved to a certain extent.
In addition, the illumination parameters of the preset light source may also include configuration parameters such as illumination intensity and illumination color. For example, parameters such as the illumination direction, illumination intensity, illumination color and light source position of the preset light source can be adjusted to adaptively change the illumination effect of the target virtual model.
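A minimal sketch of the update loop of steps S310 to S330, reusing the types and functions from the sketches above (which parameters trigger the update, and how re-rendering is scheduled, are assumptions):

    // When a light's direction changes, recompute the preset-orientation
    // attenuation and re-shade the target virtual model with the new data.
    void OnLightAdjusted(LightParams& light, const Vec3& newDirection, float preset[6]) {
        light.direction = newDirection;                              // step S310
        ComputePresetAttenuation(light.direction, preset);           // step S320
        // step S330: the target model is then re-rendered with the updated
        // attenuation data, e.g. via ShadePoint / InterpolateAttenuation above.
    }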
In an alternative embodiment, after the illumination rendering of the target virtual model based on the illumination attenuation data, steps as shown in fig. 4 may also be performed to implement adjustment of illumination details of the target virtual model:
step S410, gray data of the vertex normals of the target virtual model in each color channel is obtained;
step S420, based on the gray data of the vertex normals of the target virtual model in each color channel, performing color adjustment on the target virtual model to obtain the illumination detail adjustment effect of the target virtual model.
Wherein, the color channels may include three color channels of red, green and blue, the gray data may be used to represent the brightness of the corresponding color channel, and the gray data of the three color channels may be combined to represent one color.
In the step shown in fig. 4, indirect light simulation is performed through gray data of the vertex normal of the target virtual model in each color channel, so that detail adjustment of the model illumination effect can be realized, and illumination rendering of the target virtual model is more real and natural.
In an alternative embodiment, the step S420 of performing color adjustment on the target virtual model based on the gray data of the vertex normal of the target virtual model in each color channel to obtain the illumination detail adjustment effect of the target virtual model may be implemented by the following steps: determining color data to be superimposed based on preset channel weight parameters and gray data of vertex normals of the target virtual model in each color channel; and performing color superposition on the target virtual model based on the color data to be superimposed to obtain the illumination detail adjustment effect of the target virtual model.
The color data to be superimposed refers to color data formed by combining gray data in each color channel under the influence of preset channel weight parameters, and can be used for being superimposed on a target virtual model to perform indirect light simulation.
The preset channel weight can be used for controlling the detail adjustment amplitude of the channels with different colors, so that the controllability of illumination details can be improved to a certain extent. In the actual application process, the weight configuration may be performed according to a specific presentation effect, which is not limited herein.
Illustratively, as shown in FIG. 5, an example diagram of color channel detail effects is provided. Wherein, the three parameters N.r, N.g and N.b respectively represent the preset channel weights corresponding to the red, green and blue color channels, and models 501, 502 and 503 respectively show exemplary detail-effect models corresponding to the red, green and blue color channels.
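A sketch of combining the per-channel grayscale of the vertex normal with the preset channel weights to form the color data to be superimposed; the [-1, 1] to [0, 1] remapping of normal components is an assumption:

    // Detail color from the vertex normal's per-channel grayscale and the
    // preset channel weights (cf. N.r / N.g / N.b in FIG. 5).
    Vec3 DetailColor(const Vec3& vertexNormal, const Vec3& channelWeights) {
        // Remap each normal component from [-1, 1] to a [0, 1] grayscale value.
        Vec3 gray = { (vertexNormal.x + 1.0f) * 0.5f,
                      (vertexNormal.y + 1.0f) * 0.5f,
                      (vertexNormal.z + 1.0f) * 0.5f };
        return { gray.x * channelWeights.x,
                 gray.y * channelWeights.y,
                 gray.z * channelWeights.z };
    }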
In an optional embodiment, the color superposition performed on the target virtual model based on the color data to be superimposed, so as to obtain the illumination detail adjustment effect of the target virtual model, can specifically be realized through the following steps: determining transition color data based on the color data to be superimposed; performing color adjustment on the target virtual model based on the transition color data to obtain an intermediate adjustment effect of the target virtual model; and performing color adjustment on the target virtual model, on the basis of the intermediate adjustment effect, based on the color data to be superimposed, so as to obtain the final adjustment effect of the target virtual model.
The transition color data refers to transition color data adopted by the target virtual model in overlaying color data to be overlaid. For example, the transition color data may be determined by a linear interpolation algorithm.
The intermediate adjustment effect of the target virtual model refers to the illumination rendering effect obtained by superposing the colors corresponding to the transition color data on the target virtual model.
In the above steps, after the target virtual model is adjusted based on the transition color data, the target virtual model is adjusted based on the color data to be superimposed, so that a transition stage exists in the adjustment of the model details, and the performance effect is more natural and smooth.
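A sketch of the two-stage adjustment; the linear-interpolation midpoint of 0.5 and the additive superposition are illustrative assumptions (clamping to the displayable range is omitted for brevity):

    Vec3 Lerp(const Vec3& a, const Vec3& b, float t) {
        return { a.x + (b.x - a.x) * t,
                 a.y + (b.y - a.y) * t,
                 a.z + (b.z - a.z) * t };
    }

    // First blend toward a transition color obtained by linear interpolation
    // (intermediate adjustment effect), then superimpose the full
    // to-be-superimposed color (final adjustment effect).
    Vec3 ApplyDetail(const Vec3& baseColor, const Vec3& overlayColor) {
        Vec3 transition = Lerp(baseColor, overlayColor, 0.5f); // transition color data
        Vec3 intermediate = { baseColor.x + transition.x,      // intermediate effect
                              baseColor.y + transition.y,
                              baseColor.z + transition.z };
        return { intermediate.x + overlayColor.x,              // final effect
                 intermediate.y + overlayColor.y,
                 intermediate.z + overlayColor.z };
    }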
Optionally, if the target virtual model belongs to the near view, the model surrounding area matched with it may be used to perform illumination rendering, and the gray data of the vertex normals of the target virtual model in each color channel may be used to adjust its illumination details; if the target virtual model belongs to the middle view or the far view, the model surrounding area matched with it may be used to perform illumination rendering without adjusting illumination details, further reducing unnecessary calculation overhead and improving the processing efficiency of model rendering.
Optionally, in a multi-light-source scene, if the target virtual model belongs to the near view, a first number of light source effects may be superimposed on it; if it belongs to the middle view, a second number of light source effects may be superimposed; and if it belongs to the far view, a third number of light source effects may be superimposed, wherein the first number is greater than the second number and the second number is greater than the third number, further reducing unnecessary calculation overhead and improving the processing efficiency of model rendering.
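As a small sketch of this per-scene-type light budget, reusing the SceneType enum from the earlier sketch (the concrete counts are placeholders, since the disclosure only fixes their ordering):

    // Number of light source effects superimposed per scene type
    // (first number > second number > third number).
    int LightBudget(SceneType type) {
        switch (type) {
            case SceneType::NearView:   return 4; // first number (assumed)
            case SceneType::MiddleView: return 2; // second number (assumed)
            default:                    return 1; // third number, far view (assumed)
        }
    }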
As shown in fig. 6, a flowchart for illumination rendering of a model using a model enclosure is provided, which may specifically include the following steps:
Step S601, determining a model surrounding area of the target virtual model based on the axis vectors of the model bounding box of the target virtual model in three-dimensional space, wherein the model surrounding area comprises a spherical surrounding area centered on the center of the model bounding box;
step S602, performing dot product operation on the space vector data corresponding to the preset orientations on the surface of the model surrounding area and the illumination vector data of the preset light source in the current illumination direction to obtain the dot product results corresponding to the preset orientations on the surface of the model surrounding area;
step S603, performing illumination attenuation mapping on the dot product results corresponding to the preset orientations on the surface of the model surrounding area to obtain the illumination attenuation data corresponding to the preset orientations, and determining, through interpolation, the illumination attenuation data corresponding to the non-preset orientations between adjacent preset orientations according to the illumination attenuation data corresponding to those adjacent preset orientations;
step S604, performing illumination rendering on the target virtual model based on the illumination attenuation data;
step S605, gray data of the vertex normals of the target virtual model in each color channel is obtained;
step S606, determining color data to be superimposed based on preset channel weight parameters and gray data of the vertex normals of the target virtual model in each color channel;
Step S607, determining transition color data based on the color data to be superimposed;
step S608, performing color adjustment on the target virtual model based on the transition color data to obtain an intermediate adjustment effect of the target virtual model;
step S609, based on the color data to be superimposed, performing color adjustment on the target virtual model on the basis of the intermediate adjustment effect, and obtaining the final adjustment effect of the target virtual model.
Fig. 7 illustrates a model rendering apparatus 700 in an exemplary embodiment of the present disclosure. As shown in fig. 7, the model rendering apparatus 700 may include:
the bounding region determining module 710 is configured to determine, based on a model bounding box of the target virtual model, a model bounding region that matches the target virtual model, where the model bounding region is a region wrapping the target virtual model;
the attenuation determining module 720 is configured to determine illumination attenuation data corresponding to a plurality of orientations on the surface of the model surrounding area based on illumination vector data of the preset light source in the current illumination direction;
and the illumination rendering module 730 is configured to render illumination to the target virtual model based on the illumination attenuation data.
In an alternative embodiment, based on the foregoing, the bounding region determining module 710 may be configured to: determine the model bounding region of the target virtual model based on the axis vectors of the model bounding box in three-dimensional space.
In an alternative embodiment, based on the foregoing, the model enclosure includes a spherical enclosure centered on the center of the model enclosure.
In an alternative embodiment, based on the foregoing scheme, the attenuation determining module 720 may include: the click operation module is used for carrying out dot product operation on the space vector data corresponding to a plurality of directions on the surface of the model enclosing region and the illumination vector data of the preset light source in the current illumination direction to obtain dot product operation results corresponding to the plurality of directions on the surface of the model enclosing region; and the attenuation mapping module is used for carrying out illumination attenuation mapping on dot product operation results corresponding to a plurality of orientations on the surface of the model surrounding area to obtain illumination attenuation data corresponding to the plurality of orientations on the surface of the model surrounding area.
In an alternative embodiment, based on the foregoing scheme, the attenuation determining module 720 may include: the preset azimuth attenuation determining module is used for determining illumination attenuation data corresponding to a preset azimuth on the surface of the model surrounding area based on illumination vector data of the preset light source in the current illumination direction; the attenuation data interpolation module is used for determining the illumination attenuation data corresponding to the non-preset azimuth between the adjacent preset azimuth through interpolation according to the illumination attenuation data corresponding to the adjacent preset azimuth.
In an alternative embodiment, based on the foregoing solution, the model rendering apparatus 700 may further include: the parameter adjusting module, used for adjusting the illumination parameters of the preset light source; the attenuation updating module, used for updating the illumination attenuation data based on the adjusted illumination parameters of the preset light source; and the illumination effect updating module, used for updating the illumination effect of the target virtual model based on the updated illumination attenuation data.
In an alternative embodiment, based on the foregoing scheme, after performing illumination rendering on the target virtual model based on the illumination attenuation data, the model rendering apparatus 700 further includes: the gray data acquisition module is used for acquiring gray data of the vertex normals of the target virtual model in each color channel; the detail adjusting module is used for carrying out color adjustment on the target virtual model based on the gray data of the vertex normal of the target virtual model in each color channel, and obtaining the illumination detail adjusting effect of the target virtual model.
In an alternative embodiment, based on the foregoing solution, the detail adjustment module may include: the color data determining module is used for determining color data to be superimposed based on preset channel weight parameters and gray data of the vertex normals of the target virtual model in each color channel; and the color superposition module is used for carrying out color superposition on the target virtual model based on the color data to be superposed to obtain the illumination detail adjustment effect of the target virtual model.
In an alternative embodiment, based on the foregoing scheme, the color superimposing module may be configured to: determining transition color data based on the color data to be superimposed; based on the transition color data, performing color adjustment on the target virtual model to obtain an intermediate adjustment effect of the target virtual model; and performing color adjustment on the target virtual model on the basis of the intermediate adjustment effect based on the color data to be superimposed, so as to obtain the final adjustment effect of the target virtual model.
The specific details of each module in the model rendering device 700 are described in detail in the method section, and the details that are not disclosed may refer to the embodiment of the method section, so that they will not be described in detail.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the model rendering method described above in the present specification. In some possible implementations, aspects of the present disclosure may also be implemented in the form of a program product comprising program code for causing an electronic device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on an electronic device.
The program product may employ a portable compact disc read-only memory (CD-ROM) and comprise program code and may be run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF (Radio Frequency) and the like, or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The exemplary embodiment of the disclosure also provides an electronic device capable of implementing the model rendering method. An electronic device 800 according to such an exemplary embodiment of the present disclosure is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 8, the electronic device 800 may be embodied in the form of a general purpose computing device. Components of electronic device 800 may include, but are not limited to: at least one processing unit 810, at least one memory unit 820, a bus 830 connecting the different system components (including memory unit 820 and processing unit 810), and a display unit 840.
The storage unit 820 stores program code that can be executed by the processing unit 810, so that the processing unit 810 performs steps according to various exemplary embodiments of the present disclosure described in the above section of the "exemplary method" of the present specification.
In particular, a program product stored on a computer readable storage medium may cause an electronic device to perform the steps of:
determining a model surrounding area matched with the target virtual model based on a model surrounding box of the target virtual model, wherein the model surrounding area is an area for wrapping the target virtual model;
Determining illumination attenuation data corresponding to a plurality of directions on the surface of a model surrounding area based on illumination vector data of a preset light source in the current illumination direction;
and performing illumination rendering on the target virtual model based on the illumination attenuation data.
In an alternative embodiment, based on the foregoing solution, the determining of the model bounding region matching the target virtual model based on the model bounding box of the target virtual model may specifically be implemented by the following step: determining the model bounding region of the target virtual model based on the axis vectors of the model bounding box in three-dimensional space.
In an alternative embodiment, based on the foregoing, the model enclosure includes a spherical enclosure centered on a center of the model enclosure.
In an optional embodiment, based on the foregoing solution, the determining, based on the illumination vector data of the preset light source in the current illumination direction, illumination attenuation data corresponding to a plurality of orientations on the surface of the model enclosure may be specifically implemented by the following steps: carrying out dot product operation on space vector data corresponding to a plurality of directions on the surface of the model enclosing region and illumination vector data of a preset light source in the current illumination direction to obtain dot product operation results corresponding to the plurality of directions on the surface of the model enclosing region; and carrying out illumination attenuation mapping on dot product operation results corresponding to a plurality of orientations on the surface of the model enclosing region to obtain illumination attenuation data corresponding to the plurality of orientations on the surface of the model enclosing region.
In an optional embodiment, based on the foregoing solution, the determining, based on the illumination vector data of the preset light source in the current illumination direction, of illumination attenuation data corresponding to a plurality of orientations on the surface of the model surrounding area may be specifically implemented by the following steps: determining illumination attenuation data corresponding to preset orientations on the surface of the model surrounding area based on the illumination vector data of the preset light source in the current illumination direction; and determining, through interpolation, the illumination attenuation data corresponding to the non-preset orientations between adjacent preset orientations according to the illumination attenuation data corresponding to those adjacent preset orientations.
In an alternative embodiment, based on the foregoing, the following steps may also be performed: adjusting the illumination parameters of the preset light source; updating the illumination attenuation data based on the adjusted illumination parameters of the preset light source; and updating the illumination effect of the target virtual model based on the updated illumination attenuation data.
In an alternative embodiment, based on the foregoing, after the illumination rendering of the target virtual model based on the illumination attenuation data, the following steps may further be performed: acquiring gray data of the vertex normals of the target virtual model in each color channel; and performing color adjustment on the target virtual model based on the gray data of the vertex normals in each color channel, to obtain an illumination detail adjustment effect of the target virtual model.
In an optional embodiment, based on the foregoing scheme, the color adjustment performed on the target virtual model based on the gray data of the vertex normals in each color channel, to obtain the illumination detail adjustment effect of the target virtual model, may specifically be implemented by the following steps: determining color data to be superimposed based on preset channel weight parameters and the gray data of the vertex normals of the target virtual model in each color channel; and performing color superposition on the target virtual model based on the color data to be superimposed, to obtain the illumination detail adjustment effect of the target virtual model.
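By way of illustration only, the sketch below forms the color data to be superimposed by treating each vertex normal's components as per-channel gray data and scaling them by preset channel weights. Remapping the normal from [-1, 1] to [0, 1] is an assumption of this sketch.

```python
import numpy as np

def color_to_superimpose(vertex_normals, channel_weights):
    """Weight each color channel's gray data (from the vertex normal)
    by the preset channel weight parameters."""
    normals = np.asarray(vertex_normals, dtype=float)        # (N, 3) unit normals
    gray = normals * 0.5 + 0.5                               # assumed remap to [0, 1] gray data per R/G/B channel
    return gray * np.asarray(channel_weights, dtype=float)   # color data to be superimposed
```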
In an optional embodiment, based on the foregoing solution, the color superposition performed on the target virtual model based on the color data to be superimposed, to obtain the illumination detail adjustment effect of the target virtual model, may specifically be implemented by the following steps: determining transition color data based on the color data to be superimposed; performing color adjustment on the target virtual model based on the transition color data, to obtain an intermediate adjustment effect of the target virtual model; and performing color adjustment on the target virtual model on the basis of the intermediate adjustment effect, based on the color data to be superimposed, to obtain the final adjustment effect of the target virtual model.
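By way of illustration only, the following sketch performs this two-stage superposition: a transition color derived from the overlay produces the intermediate adjustment effect, and applying the full overlay on that basis yields the final effect. The halfway transition factor and additive blending are assumptions of this sketch.

```python
import numpy as np

def superpose_with_transition(base_color, overlay_color, transition_factor=0.5):
    """Two-stage color superposition via a transition color."""
    overlay = np.asarray(overlay_color, dtype=float)
    transition = transition_factor * overlay                                     # transition color data
    intermediate = np.clip(np.asarray(base_color, dtype=float) + transition,
                           0.0, 1.0)                                             # intermediate adjustment effect
    return np.clip(intermediate + overlay, 0.0, 1.0)                             # final adjustment effect
```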
In the model rendering process, by determining the model bounding region of the target virtual model and limiting illumination rendering to that region, unnecessary computation overhead and computational complexity are reduced, and the processing efficiency of model rendering is improved to a certain extent. This preserves the smoothness of the game, lowers the demands on hardware performance, and makes the method well suited to mobile terminals and to multi-light-source application scenarios.
Storage unit 820 may include readable media in the form of volatile storage units such as Random Access Memory (RAM) 821 and/or cache memory unit 822, and may further include Read Only Memory (ROM) 823.
The storage unit 820 may also include a program/utility 824 having a set (at least one) of program modules 825, such program modules 825 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 900 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. Also, the electronic device 800 may communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet, through a network adapter 860. As shown in fig. 8, the network adapter 860 communicates with other modules of the electronic device 800 over the bus 830. It should be appreciated that although not shown in fig. 8, other hardware and/or software modules may be used in connection with the electronic device 800, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID (Redundant Array of Independent Disks) systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a portable hard drive) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to perform the method according to the exemplary embodiments of the present disclosure.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method of model rendering, the method comprising:
determining a model bounding region matching a target virtual model based on a model bounding box of the target virtual model, wherein the model bounding region is a region that wraps the target virtual model;
determining illumination attenuation data corresponding to a plurality of orientations on the surface of the model bounding region based on illumination vector data of a preset light source in a current illumination direction;
and performing illumination rendering on the target virtual model based on the illumination attenuation data.
2. The method of claim 1, wherein the determining a model bounding region matching the target virtual model based on the model bounding box of the target virtual model comprises:
determining the model bounding region of the target virtual model based on the axial dimensions of the model bounding box in three-dimensional space.
3. The method of claim 1, wherein the model bounding region comprises a spherical bounding region centered about a center of the model bounding box.
4. The method of claim 1, wherein the determining illumination attenuation data corresponding to a plurality of orientations on the surface of the model bounding region based on illumination vector data of a preset light source in a current illumination direction comprises:
performing a dot product operation on spatial vector data corresponding to the plurality of orientations on the surface of the model bounding region and the illumination vector data of the preset light source in the current illumination direction, to obtain dot product results corresponding to the plurality of orientations;
and performing illumination attenuation mapping on the dot product results corresponding to the plurality of orientations, to obtain the illumination attenuation data corresponding to the plurality of orientations on the surface of the model bounding region.
5. The method of claim 1, wherein the determining illumination attenuation data corresponding to a plurality of orientations on the surface of the model bounding region based on illumination vector data of a preset light source in a current illumination direction comprises:
determining illumination attenuation data corresponding to preset orientations on the surface of the model bounding region based on the illumination vector data of the preset light source in the current illumination direction;
and determining, by interpolation, the illumination attenuation data corresponding to non-preset orientations between adjacent preset orientations according to the illumination attenuation data corresponding to the adjacent preset orientations.
6. The method according to claim 1, wherein the method further comprises:
adjusting the illumination parameters of the preset light source;
updating the illumination attenuation data based on the adjusted illumination parameters of the preset light source;
and updating the illumination effect of the target virtual model based on the updated illumination attenuation data.
7. The method of claim 1, wherein after the illumination rendering is performed on the target virtual model based on the illumination attenuation data, the method further comprises:
acquiring gray data of the vertex normals of the target virtual model in each color channel;
and performing color adjustment on the target virtual model based on gray data of the vertex normal of the target virtual model in each color channel to obtain an illumination detail adjustment effect of the target virtual model.
8. The method according to claim 7, wherein performing color adjustment on the target virtual model based on the gray data of the vertex normal of the target virtual model in each color channel to obtain the illumination detail adjustment effect of the target virtual model includes:
determining color data to be superimposed based on preset channel weight parameters and gray data of vertex normals of the target virtual model in each color channel;
and performing color superposition on the target virtual model based on the color data to be superimposed, to obtain the illumination detail adjustment effect of the target virtual model.
9. The method according to claim 8, wherein the performing color superposition on the target virtual model based on the color data to be superimposed to obtain the illumination detail adjustment effect of the target virtual model comprises:
determining transition color data based on the color data to be superimposed;
based on the transition color data, performing color adjustment on the target virtual model to obtain an intermediate adjustment effect of the target virtual model;
and performing color adjustment on the target virtual model on the basis of the intermediate adjustment effect based on the color data to be superimposed, so as to obtain a final adjustment effect of the target virtual model.
10. A model rendering apparatus, the apparatus comprising:
the surrounding area determining module is used for determining a model surrounding area matched with the target virtual model based on a model surrounding box of the target virtual model, wherein the model surrounding area is an area for wrapping the target virtual model;
The attenuation determining module is used for determining illumination attenuation data corresponding to a plurality of directions on the surface of the model surrounding area based on illumination vector data of a preset light source in the current illumination direction;
and the illumination rendering module is used for performing illumination rendering on the target virtual model based on the illumination attenuation data.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any one of claims 1 to 9.
12. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 9 via execution of the executable instructions.
CN202310507003.3A 2023-05-04 2023-05-04 Model rendering method, device, computer readable storage medium and electronic equipment Pending CN116543094A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310507003.3A CN116543094A (en) 2023-05-04 2023-05-04 Model rendering method, device, computer readable storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310507003.3A CN116543094A (en) 2023-05-04 2023-05-04 Model rendering method, device, computer readable storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116543094A true CN116543094A (en) 2023-08-04

Family

ID=87453780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310507003.3A Pending CN116543094A (en) 2023-05-04 2023-05-04 Model rendering method, device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116543094A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710557A (en) * 2024-02-05 2024-03-15 杭州经纬信息技术股份有限公司 Method, device, equipment and medium for constructing realistic volume cloud
CN117710557B (en) * 2024-02-05 2024-05-03 杭州经纬信息技术股份有限公司 Method, device, equipment and medium for constructing realistic volume cloud

Similar Documents

Publication Title
CN107358643B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110196746B (en) Interactive interface rendering method and device, electronic equipment and storage medium
US11024077B2 (en) Global illumination calculation method and apparatus
EP2705501B1 (en) Texturing in graphics hardware
US6700586B1 (en) Low cost graphics with stitching processing hardware support for skeletal animation
US7176919B2 (en) Recirculating shade tree blender for a graphics system
CN109903366A (en) The rendering method and device of dummy model, storage medium and electronic equipment
CN108701366A (en) The start node of tree traversal for the shadow ray in graphics process determines
US7064755B2 (en) System and method for implementing shadows using pre-computed textures
CN112891946B (en) Game scene generation method and device, readable storage medium and electronic equipment
JP2023029984A (en) Method, device, electronic apparatus, and readable storage medium for generating virtual image
CN113144611B (en) Scene rendering method and device, computer storage medium and electronic equipment
CN116543094A (en) Model rendering method, device, computer readable storage medium and electronic equipment
US8004515B1 (en) Stereoscopic vertex shader override
CN109448123B (en) Model control method and device, storage medium and electronic equipment
CN112494941B (en) Virtual object display control method and device, storage medium and electronic equipment
CN113648652A (en) Object rendering method and device, storage medium and electronic equipment
US7116333B1 (en) Data retrieval method and system
US20050251374A1 (en) Method and system for determining illumination of models using an ambient cube
JP2003168130A (en) System for previewing photorealistic rendering of synthetic scene in real-time
US7710419B2 (en) Program, information storage medium, and image generation system
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
US7724255B2 (en) Program, information storage medium, and image generation system
CN116310022A (en) Flame special effect manufacturing method and device, storage medium and electronic equipment
CN113436343B (en) Picture generation method and device for virtual concert hall, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination