CN111260766B - Virtual light source processing method, device, medium and electronic equipment

Info

Publication number
CN111260766B
Authority
CN
China
Prior art keywords
light source
divided
blocks
illumination
light sources
Prior art date
Legal status
Active
Application number
CN202010055325.5A
Other languages
Chinese (zh)
Other versions
CN111260766A (en)
Inventor
李籽良
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202010055325.5A priority Critical patent/CN111260766B/en
Publication of CN111260766A publication Critical patent/CN111260766A/en
Application granted granted Critical
Publication of CN111260766B publication Critical patent/CN111260766B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G06T 15/506: Illumination models
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)

Abstract

The present disclosure provides a virtual light source processing method, a virtual light source processing apparatus, a computer-readable medium, and an electronic device, and relates to the technical field of image processing. The method comprises the following steps: dividing a virtual scene to obtain a plurality of divided blocks of the virtual scene; acquiring a plurality of light sources within the current field of view, and determining texture information of each divided block according to the light source information of each light source; and determining the illumination effect of each divided block using the texture information of that block. The virtual light source processing method can, to a certain extent, relieve the heavy GPU load caused by rendering dynamic light sources, thereby reducing resource consumption during rendering.

Description

Virtual light source processing method, device, medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technology, and in particular, to a virtual light source processing method, a virtual light source processing apparatus, a computer readable medium, and an electronic device.
Background
In order to make light source effects in a scene more realistic and give users a more lifelike visual perception, computer applications have begun to use dynamic light sources. Some high-end applications need to process hundreds of dynamic light sources per frame, which places very high demands on processing efficiency. The rendering requirements of dynamic light sources can be met by the rendering pipeline on a console or other highly configured device, but the rendering pipeline algorithm is very complex, which is unacceptable for a mobile platform or a device with a lower configuration.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure aims to provide a virtual light source processing method, a virtual light source processing apparatus, a computer-readable medium, and an electronic device, which can, to a certain extent, overcome the problem that a mobile platform cannot accommodate high-complexity light source processing, thereby improving the efficiency of light source processing in a scene.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a virtual light source processing method, including:
dividing a virtual scene to obtain a plurality of divided blocks of the virtual scene;
acquiring a plurality of light sources in a current visual field range, and determining texture information of each divided block according to light source information of each light source;
and determining the illumination effect of each divided block by using the texture information of each divided block.
In an exemplary embodiment of the present disclosure, the determining texture information of each of the divided blocks according to light source information of each of the light sources includes:
acquiring the illumination range of each light source from the corresponding light source information;
and determining the light source corresponding to each divided block from the plurality of light sources according to the illumination range so as to determine the texture information of the divided block according to the light source corresponding to the divided block.
In an exemplary embodiment of the disclosure, the determining, according to the illumination range, the light source corresponding to each of the divided blocks includes:
determining bounding boxes of the divided blocks contained in the current field of view;
for each light source, calculating an intersection of the bounding box with an illumination range of the light source;
and if a divided block is included in the intersection, associating the divided block with the light source.
In an exemplary embodiment of the present disclosure, before the associating of the divided block with the light source, the method further includes:
determining whether the number of corresponding light sources contained in the texture information of the divided block has reached a preset value;
and if the number of corresponding light sources has reached the preset value, selecting a light source to be discarded from the corresponding light sources, so as to delete the light source to be discarded.
In an exemplary embodiment of the disclosure, the selecting a light source to be discarded from the corresponding light sources includes:
acquiring the light source position and intensity of each corresponding light source from the light source information of each corresponding light source;
calculating a contribution value of each corresponding light source by utilizing the light source position and the intensity;
and selecting a light source to be discarded from the corresponding light sources according to the contribution value.
In an exemplary embodiment of the present disclosure, the dividing of the virtual scene to obtain a plurality of divided blocks of the virtual scene includes:
dividing the spatial data of the virtual scene in the horizontal direction to obtain the plurality of divided blocks.
In an exemplary embodiment of the present disclosure, the method further comprises:
in response to a change in the user field of view, a plurality of light sources within the current field of view are updated.
In an exemplary embodiment of the disclosure, the determining the lighting effect of each of the divided blocks using the texture information of each of the divided blocks includes:
obtaining the target light source corresponding to each divided block from the texture information;
acquiring the light source type of the target light source from the light source information of the target light source;
and determining an illumination attenuation algorithm through the light source type so as to calculate the illumination effect of the target light source on each divided block.
In an exemplary embodiment of the disclosure, the determining the lighting effect of each of the divided blocks using the texture information of each of the divided blocks includes:
calculating the color value of each divided block by combining the illumination attenuation algorithm and the light source information;
rendering the divided blocks based on the color values to obtain illumination effects of the divided blocks.
According to a second aspect of the present disclosure, there is provided a virtual light source processing apparatus, including a scene division module, a light source information determination module, and an illumination determination module, wherein:
the scene dividing module is used for dividing the virtual scene to obtain a plurality of divided blocks of the virtual scene;
the light source information determining module is used for obtaining a plurality of light sources in the current visual field range and determining texture information of each divided block according to the light source information of each light source;
and the illumination determining module is used for determining the illumination effect of each divided block by utilizing the texture information of each divided block.
In one exemplary embodiment of the present disclosure, the light source information determining module may include an illumination range determining unit and a texture determining unit, wherein:
The illumination range determining unit is used for acquiring the illumination range of each light source from the corresponding light source information.
The texture determining unit is used for determining, from the plurality of light sources, the light source corresponding to each divided block according to the illumination range, so as to determine the texture information of the divided block according to its corresponding light source.
In one exemplary embodiment of the present disclosure, the texture determining unit may include a field of view determining unit, an intersection calculating unit, and a correspondence determining unit, wherein:
The field of view determining unit is used for determining a bounding box of the divided blocks contained in the current field of view.
The intersection calculating unit is used for calculating, for each light source, the intersection of the bounding box with the illumination range of the light source.
The correspondence determining unit is used for associating a divided block with the light source if the divided block is contained in the intersection.
In an exemplary embodiment of the present disclosure, the apparatus further includes a light source number determining module and a light source rejecting module, wherein:
The light source number determining module is used for determining whether the number of corresponding light sources contained in the texture information of a divided block has reached a preset value.
The light source eliminating module is used for selecting a light source to be discarded from the corresponding light sources if the number of corresponding light sources has reached the preset value, so as to delete the light source to be discarded.
In one exemplary embodiment of the present disclosure, the light source rejection module may include a light source intensity acquisition unit, a contribution value determination unit, and a light source selection unit, wherein:
the light source intensity obtaining unit is used for obtaining the light source position and intensity of each corresponding light source through the light source information of each corresponding light source.
The contribution value determining unit is used for calculating the contribution value of each corresponding light source using the light source position and intensity.
The light source selection unit is used for selecting the light source to be discarded from the corresponding light sources according to the contribution value.
In one exemplary embodiment of the present disclosure, the scene division module may be specifically configured to: and dividing the space data in the horizontal direction of the virtual scene to obtain the plurality of divided blocks.
In an exemplary embodiment of the present disclosure, the apparatus further comprises a light source updating module for updating a plurality of light sources within the current field of view in response to a change in the field of view of the user.
In one exemplary embodiment of the present disclosure, the illumination determination module may include a light source index unit, a light source type determination unit, and an illumination attenuation calculation unit, wherein:
The light source index unit is used for acquiring the target light source corresponding to each divided block from the texture information.
The light source type determining unit is used for acquiring the light source type of the target light source from the light source information of the target light source.
The illumination attenuation calculation unit is used for determining an illumination attenuation algorithm according to the light source type, so as to calculate the illumination effect of the target light source on each divided block.
In one exemplary embodiment of the present disclosure, the illumination determination module may include a color calculation unit and a rendering unit, wherein:
The color calculation unit is used for calculating the color value of each divided block by combining the illumination attenuation algorithm and the light source information.
The rendering unit is used for rendering the divided blocks based on the color values, so as to obtain the illumination effect of each divided block.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any of the above via execution of the executable instructions.
According to a fourth aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the method of any of the above.
Exemplary embodiments of the present disclosure may have some or all of the following advantages:
in the virtual light source processing method provided by an example embodiment of the present disclosure, a virtual scene is converted into a plurality of partition blocks by partitioning the virtual scene, so that texture information of each light source for each partition block is determined, and compared with coloring each light source through a rendering pipeline, algorithm complexity is greatly reduced, and processing efficiency can be improved; in addition, the scene depth data does not need to be processed, so that the resource consumption can be reduced, and the application range is wider.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 schematically illustrates a flow chart of a virtual light source processing method according to one embodiment of the disclosure;
FIG. 2 schematically illustrates a flow chart of a virtual light source processing method according to another embodiment of the present disclosure;
FIG. 3 schematically illustrates a virtual light source processing method flow diagram according to one embodiment of the disclosure;
FIG. 4 schematically illustrates a schematic diagram of a bounding box of a current field of view in an embodiment in accordance with the present disclosure;
FIG. 5 schematically illustrates a schematic diagram of an intersection of a bounding box with a light range in one embodiment in accordance with the disclosure;
FIG. 6 schematically illustrates a schematic diagram of an intersection of a bounding box with a light range in another embodiment according to the present disclosure;
FIG. 7 schematically illustrates a virtual light source processing method flow diagram according to one embodiment of the disclosure;
FIG. 8 schematically illustrates a virtual light source processing method flow diagram according to one embodiment of the disclosure;
FIG. 9 schematically illustrates a data structure diagram of texture information according to one embodiment of the present disclosure;
FIG. 10 schematically illustrates a virtual light source processing method flow diagram according to one embodiment of the disclosure;
FIG. 11 schematically illustrates a virtual light source processing method flow diagram according to another embodiment of the present disclosure;
FIG. 12 schematically illustrates a virtual light source processing method flow diagram according to one embodiment of the disclosure;
FIG. 13 schematically illustrates a block diagram of a virtual light source processing apparatus according to one embodiment of the disclosure;
FIG. 14 schematically illustrates a system architecture diagram for implementing a virtual light source processing method according to one embodiment of the disclosure;
fig. 15 shows a schematic diagram of a computer system suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The following describes the technical scheme of the embodiments of the present disclosure in detail:
conventional forward rendering pipeline renders n objects colored under m light sources, requiring m x n times of rendering, and performance is greatly degraded as the number of objects and light sources in a scene increases. In one solution idea provided by the inventor, the light source may be processed by a block-based rendering method, and the block-based rendering (Tile Based Rendering) is to pre-render the depth of the scene, then divide the screen space into multiple tiles (grids), for example, 32×32, and calculate which tiles are affected by the light source according to the scene depth data, thereby performing illumination coloring on each tile. Although block-based rendering can reduce the complexity of the algorithm, the depth of the scene needs to be obtained first, which doubles the number of Draw calls (commands for the CPU to Call the image programming interface) and is unacceptable for mobile platforms. In another solution idea provided by the inventor, the Draw Call can be avoided by calculating the depth of the previous frame, but the screen flicker can be caused by delaying one frame calculation, and the rendering error can occur if the player lens moves fast, so that the rendering effect is poor.
In view of one or more of the above problems, the present exemplary embodiment provides a virtual light source processing method. Referring to fig. 1, the method may include the steps of:
step S110: and dividing the virtual scene to obtain a plurality of division blocks of the virtual scene.
Step S120: and acquiring a plurality of light sources in the current visual field range, and determining texture information of each divided block according to the light source information of each light source.
Step S130: and determining the illumination effect of each divided block by using the texture information of each divided block.
In the virtual light source processing method provided by an example embodiment of the present disclosure, a virtual scene is converted into a plurality of partition blocks by partitioning the virtual scene, so that texture information of each light source for each partition block is determined, and compared with coloring each light source through a rendering pipeline, algorithm complexity is greatly reduced, and processing efficiency can be improved; in addition, the scene depth data does not need to be processed, so that the resource consumption can be reduced, and the application range is wider.
Next, the above steps of the present exemplary embodiment will be described in more detail.
In step S110, a virtual scene is divided, and a plurality of divided blocks of the virtual scene are obtained.
The virtual scene comprises a three-dimensional or multidimensional space. Most game scenes show little visual variation in the vertical direction, so the virtual scene can be divided in the horizontal direction, for example along the x-z directions (the horizontal plane) of the scene's spatial bounding box, thereby obtaining a plurality of divided blocks. Uniform division yields equally sized divided blocks, and the grid specification can be chosen according to the actual size of the virtual scene, for example 32×32, 64×64, or 128×128, or other specifications such as 8×8 as required; this embodiment does not particularly limit it. The division converts the virtual scene into 2D divided blocks, which reduces algorithmic complexity and improves computational efficiency; moreover, dividing in the horizontal direction handles the problem of light sources overlapping in the horizontal plane, achieving a better illumination effect.
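As a concrete illustration of this horizontal division, the following C++ sketch maps a world-space position to its divided block. The block size, grid resolution, and all identifiers (kBlockSize, kGridDim, BlockOf) are assumptions made for this sketch, not values prescribed by the disclosure.

```cpp
#include <algorithm>
#include <cmath>

constexpr float kBlockSize = 2.0f;  // assumed world-space size of one block (meters)
constexpr int   kGridDim   = 128;   // assumed grid resolution (128 x 128 blocks)

struct BlockIndex { int bx; int bz; };

// Map a world-space position to its divided block on the horizontal x-z
// plane; the vertical (y) axis is ignored, matching the horizontal
// division described above.
BlockIndex BlockOf(float worldX, float worldZ, float originX, float originZ) {
    int bx = static_cast<int>(std::floor((worldX - originX) / kBlockSize));
    int bz = static_cast<int>(std::floor((worldZ - originZ) / kBlockSize));
    // Clamp so positions on the grid edge still map to a valid block.
    bx = std::clamp(bx, 0, kGridDim - 1);
    bz = std::clamp(bz, 0, kGridDim - 1);
    return {bx, bz};
}
```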
In step S120, a plurality of light sources within the current field of view are acquired, and texture information of each of the divided blocks is determined according to the light source information of each of the light sources.
The current field of view refers to the range of the scene displayed on the current display screen; light sources within that range can be acquired from the currently displayed coordinate range. Every element in the virtual scene has unique coordinates, so the coordinates of all light sources can be obtained and the light sources within the current field of view determined. As the user operates, the current field of view changes accordingly and the set of light sources changes dynamically, for example a light source may move out of the current field of view; if a change in the current field of view is detected, the light sources falling within the changed field of view can be re-acquired, keeping the light sources up to date. The light source information is defined by the developer when the virtual scene is designed, and may specifically include attribute information of the light source, such as an identification number, the coordinates of the light source in the virtual scene, the light source type, the light source color, and the illumination intensity, and may also include other information such as illumination attenuation information and illumination shape; this embodiment does not particularly limit it.
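The light source information described above can be pictured as one record per light. The sketch below shows one possible C++ layout under that assumption; every field name is illustrative rather than the disclosure's own data format.

```cpp
#include <cstdint>

enum class LightType : std::uint8_t { Point, Spot };

struct Float3 { float x, y, z; };

// One possible record for the light source information described above;
// further data such as attenuation parameters could be appended.
struct LightInfo {
    std::uint32_t id;        // identification number
    Float3        position;  // coordinates in the virtual scene
    LightType     type;      // light source type (point light, spotlight, ...)
    Float3        color;     // light source color
    float         intensity; // illumination intensity
    float         radius;    // illumination radius, defining the lit range
    Float3        spotDir;   // spotlight direction (spotlights only)
    float         spotInner; // spotlight inner angle (degrees)
    float         spotOuter; // spotlight outer angle (degrees)
};
```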
The texture information may be used to determine the illumination effect on the divided blocks. It may specifically include index information of the light sources corresponding to each divided block, so that the illumination effect of the divided block is determined through the index information; for example, the index information of divided block A may list the four light sources a, b, c, and e. It may also include attribute information of the light sources, such as the light source color or an associated map, and may further include the illumination characteristics of the divided blocks, which can be used to determine the image of the illumination effect corresponding to each divided block. The light source information can be obtained by accessing the resource catalog corresponding to the virtual scene, and the illumination characteristic of each light source for each divided block is then calculated. Specifically, the color and type of a light source can be determined from its light source information; light sources of different types attenuate differently, and a divided block closer to the light source receives more illumination, so the distance between a divided block and the light source can be calculated from their coordinates, and the illumination characteristic at that distance computed with an illumination attenuation algorithm. Each piece of light source information within the current field of view is traversed in turn, its illumination characteristics for each divided block are determined, and after all light source information in the current field of view has been traversed, the texture information of each divided block is obtained. For example, if the current field of view contains five light sources, then for divided block A the illumination characteristics of light sources a, b, c, d, and e are calculated in turn, yielding the texture information of A.
In an exemplary embodiment, the method may include step S201 and step S202, as shown in fig. 2, wherein:
In step S201, the illumination range of each light source is obtained from the corresponding light source information. The illumination range refers to the extent of the scene affected by the light source's illumination. Specifically, the coordinates of the light source and its illumination type can be queried from the light source information, and the illumination range determined from them; for example, if the light source is located at O and its illumination shape is circular, the corresponding illumination range is the circle centered at O with illumination radius r. The illumination range can also be obtained in other ways, for example by querying the affected size range from the light source information and then determining the divided blocks within that range from the light source coordinates.
In step S202, the light source corresponding to each divided block is determined from the plurality of light sources according to the illumination range, so that the texture information of each divided block is obtained from its corresponding light sources. Specifically, once the illumination range of each light source in the current field of view is determined, the divided blocks affected by that light source can be determined: for each divided block, if it lies within the illumination range of a light source, the divided block corresponds to that light source. Traversing all the light sources yields all the light sources corresponding to the divided block, which are stored as the texture information of the block. For example, if divided block 1 receives the illumination of light sources a, c, and f, its texture information may be the illumination information of light sources a, c, and f. Specifically, the method for determining the light source corresponding to each divided block may include the following steps S301 to S303, as shown in fig. 3, where:
In step S301, a bounding box of the divided blocks contained in the current field of view is determined. For example, as shown in fig. 4, taking the camera position as the starting point P0 and the current view center P1 as the center, a bounding box 401 can be obtained, where V0 is the camera's line-of-sight direction. The actual size of the bounding box is determined by the size of a divided block and the size of the texture image: if the divided block size is N and the texture image size is M, all the divided blocks cover N×M; for example, with a 128×128 texture image and 2-meter divided blocks, the bounding box covers 256 meters × 256 meters. Since scenes outside the field of view are not currently displayed, only the light sources corresponding to the divided blocks within the current field of view are computed, which reduces the amount of calculation and improves efficiency.
In step S302, for each light source, the intersection of the bounding box with the illumination range of the light source is calculated. The illumination range contains the divided blocks affected by the light source, and intersecting it with the bounding box of the current field of view yields the divided blocks in view that the light source affects. Take point lights and spotlights as examples. If the illumination shape of a point light is a circle, then after the circle is obtained from the light source information, it is intersected with the bounding box of the divided blocks of the current field of view, and the resulting intersection is the effective illumination range of the point light, as shown in fig. 5. If the illumination shape of a spotlight is a triangle, the triangle is intersected with the bounding box of the current field of view to obtain the spotlight's illumination range, as shown in fig. 6.
In step S303, if a divided block is contained in the intersection, the divided block is associated with the light source. After the intersection of each light source's illumination range with the current field of view has been calculated, each divided block in the current field of view is examined to determine which intersections contain it. For example, if 10 light sources yield 10 intersections and divided block 1 is contained in intersections 1, 2 and 5, then the light sources corresponding to intersections 1, 2 and 5 are associated with divided block 1 and stored as its texture information.
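Under the same assumptions as the earlier sketches (reusing LightInfo, kBlockSize, and kGridDim), the following C++ fragment illustrates steps S301 to S303 for a point light: a circle-versus-box overlap test assigns the light's index to every divided block its range touches. A spotlight would substitute a triangle or cone overlap test, and a production version would loop only over the blocks under the circle's bounding square; the full scan here keeps the sketch short.

```cpp
#include <algorithm>
#include <vector>

struct AABB { float minX, minZ, maxX, maxZ; };

// Squared distance from a point to an axis-aligned box (0 when inside).
static float DistSqPointAABB(float px, float pz, const AABB& b) {
    float dx = std::max({b.minX - px, 0.0f, px - b.maxX});
    float dz = std::max({b.minZ - pz, 0.0f, pz - b.maxZ});
    return dx * dx + dz * dz;
}

// One list of light indices per divided block, kGridDim * kGridDim entries.
using BlockLightLists = std::vector<std::vector<int>>;

// Steps S301-S303 for a point light: append lightIndex to every divided
// block whose footprint overlaps the light's circular illumination range.
void AssignLightToBlocks(const LightInfo& light, int lightIndex,
                         const AABB& viewBox, BlockLightLists& blockLights) {
    for (int bz = 0; bz < kGridDim; ++bz) {
        for (int bx = 0; bx < kGridDim; ++bx) {
            AABB cell{viewBox.minX + bx * kBlockSize,
                      viewBox.minZ + bz * kBlockSize,
                      viewBox.minX + (bx + 1) * kBlockSize,
                      viewBox.minZ + (bz + 1) * kBlockSize};
            if (DistSqPointAABB(light.position.x, light.position.z, cell)
                    <= light.radius * light.radius) {
                blockLights[bz * kGridDim + bx].push_back(lightIndex);
            }
        }
    }
}
```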
Furthermore, it can be seen that the same divided block may be simultaneously affected by the illumination of a plurality of light sources, i.e. the divided block may correspond to a plurality of light sources. In the exemplary embodiment, determining the light source corresponding to the divided block further includes the following step S701 and step S702, as shown in fig. 7, where:
In step S701, it is determined whether the number of corresponding light sources contained in the texture information of the divided block has reached a preset value. While determining the light sources corresponding to a divided block, the current count can be tracked with a parameter: each time a light source is stored in the texture information of the divided block, the parameter is incremented by 1, and the parameter is then compared against the preset value. The preset value may be chosen according to actual requirements, for example 4, 5, or 6, or other values such as 2 or 8; this embodiment does not limit it.
In step S702, if the number of corresponding light sources has reached the preset value, a light source to be discarded is selected from the corresponding light sources and deleted, before a candidate new light source is added. Because the index texture has 4 channels, the texture information of each divided block can hold 4 indices, so the preset value may be 4: when more than 4 light sources correspond to a divided block, light sources are selected for deletion, which reduces wasted resources and lowers power consumption.
The method for selecting the light source to be discarded from the corresponding light sources may include the following steps S801 to S803, as shown in fig. 8, specifically:
In step S801, the light source position and intensity of each corresponding light source are obtained from its light source information. The light source information may include the coordinates and an intensity parameter of the light source, so position and intensity can be obtained by querying it; alternatively, an intensity can be preset for each light source type and then looked up according to the type of the light source. The farther from the light source, the less illumination is received; that is, illumination intensity decays with distance.
In step S802, the contribution value of each corresponding light source is calculated using the light source position and intensity. For example, the contribution of each corresponding light source to a divided block can be computed with the physical inverse-square law; it may also be computed with other illumination attenuation algorithms or a custom algorithm, which this embodiment does not particularly limit.
In step S803, the light source to be discarded is selected from the corresponding light sources according to the contribution values. For example, after the illumination contributions of the corresponding light sources to the divided block are obtained, the light sources with larger contributions can be retained and the light source with the smallest contribution discarded. After the light source with a small contribution is deleted, the new light source can be added, so that the light sources in the texture information of the divided block never exceed the preset value, which helps reduce artifacts and improve the display quality.
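A minimal sketch of steps S801 to S803, assuming the preset value of 4 indices and the inverse-square contribution mentioned above (contribution ≈ intensity / distance²), evaluated at the block center; the names and structure are illustrative.

```cpp
#include <vector>

static float Contribution(const LightInfo& l, float cx, float cz) {
    float dx = l.position.x - cx, dz = l.position.z - cz;
    float d2 = dx * dx + dz * dz + 1e-6f;  // guard against division by zero
    return l.intensity / d2;               // inverse-square falloff
}

// Try to add newIdx to a full block: discard whichever corresponding light
// contributes least at the block center, which may be the candidate itself.
void AddWithDiscard(std::vector<int>& blockLightIdx, int newIdx,
                    const std::vector<LightInfo>& lights,
                    float centerX, float centerZ) {
    blockLightIdx.push_back(newIdx);
    if (blockLightIdx.size() <= 4) return;          // preset value: 4 indices
    auto weakest = blockLightIdx.begin();
    float minC = Contribution(lights[*weakest], centerX, centerZ);
    for (auto it = blockLightIdx.begin() + 1; it != blockLightIdx.end(); ++it) {
        float c = Contribution(lights[*it], centerX, centerZ);
        if (c < minC) { minC = c; weakest = it; }
    }
    blockLightIdx.erase(weakest);                   // delete the discarded light
}
```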
For example, after the light sources corresponding to the divided blocks are determined, the light source information of each light source is traversed and written into the texture information, whose format may be RGBA. Fig. 9 shows a data structure of the texture information in which the information of multiple light sources is stored: as shown in fig. 9, the four channels of one pixel may correspond to four light sources, and each channel may be 8 bits, allowing 255 light sources to be represented. Here LightPosIndRadius, four floating point numbers (float4), can hold the position of the light source (such as x, y, z) and its radius; LightColor can represent the color of the light source; SpotDir can represent the direction of a spotlight; and SpotAngles can represent the inner and outer angles of a spotlight, used to calculate its light attenuation. In this embodiment, the process of determining the texture information of the divided blocks runs on the CPU; compared with computing it on the GPU, this places lower requirements on hardware and is easier to debug and extend.
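The layout in fig. 9 can be mirrored in CPU-side structures as follows; the struct and field names follow the figure's labels, while the exact packing is an assumption of this sketch.

```cpp
#include <cstdint>

struct Float4 { float x, y, z, w; };

// Index texture: four 8-bit channels, so up to four lights per divided
// block, each index addressing one of at most 255 lights (0 taken here to
// mean "no light").
struct BlockIndexTexel { std::uint8_t r, g, b, a; };

// Per-light parameter entries, matching the fields shown in fig. 9.
struct LightParams {
    Float4 lightPosAndRadius; // LightPosIndRadius: position (x, y, z) + radius
    Float4 lightColor;        // LightColor: RGB color (+ spare channel)
    Float4 spotDir;           // SpotDir: spotlight direction
    Float4 spotAngles;        // SpotAngles: inner/outer angles for attenuation
};
```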
Next, with continued reference to fig. 1, in step S130, the illumination effect of each of the divided blocks is determined using the texture information of each of the divided blocks.
After the texture information of each divided block is determined, the illumination effect can be calculated from it when the virtual scene is rendered. For example, the illumination effect may be calculated in the Pixel Shader: first, the UV coordinates into the texture information are computed from the coordinates of the divided blocks, the size of the divided blocks, and the world-space coordinates of the pixel, so as to fetch the light source data in the texture information; the RGBA value of the pixel is then calculated from that light source data, giving the pixel's illumination effect. Specifically, the method may include steps S1001 to S1003, as shown in fig. 10, in which:
In step S1001, the target light sources corresponding to each divided block are obtained from the texture information. Since the index information of each light source is stored in the texture information, the target light sources corresponding to each divided block can be determined from it. A divided block may also contain no light sources in its texture information, and a parameter can record whether any are present: if the divided block is not affected by the illumination of any light source the parameter is set to 0, and if it is affected the parameter is set to 1. When acquiring the target light sources, the parameter is checked first, and the contained target light sources are fetched only when it is not 0, thereby reducing the amount of computation.
In step S1002, the light source type of the target light source is acquired from the light source information of the target light source. The light source information may include the light source type, for example point light or spotlight; different light source types illuminate differently, so the type of the target light source is obtained in order to calculate the illumination effect according to it.
In step S1003, an illumination attenuation algorithm is determined by the light source type, so as to calculate the illumination effect of the target light source on each divided block. In this embodiment, several light source types can be predetermined, for example point light, spotlight, or other customized types, and an illumination attenuation algorithm is then determined for each type; the algorithm may follow a physical law or a custom method, for example linear attenuation of illumination intensity with distance, which this embodiment does not limit. The RGBA value of each pixel contained in a divided block can be calculated through the illumination attenuation algorithm, thereby determining the illumination effect on the pixel. Specifically, the method for determining the illumination effect may include the following steps S1101 and S1102, as shown in fig. 11, in which:
In step S1101, the color values of the divided blocks are calculated by combining the illumination attenuation algorithm with the light source information. For example, if the illumination attenuation algorithm is a function CalculateLighting, the R value of the color channel may be obtained as CalculateLighting(IndexData.r), where IndexData.r is the R value of the light source; the G, B, and A values can be obtained in the same way, yielding the color values of the divided blocks.
In step S1102, the divided blocks are rendered based on the color values to obtain their illumination effects. For example, pixel shading may be performed on the GPU with the Pixel Shader: the color value of each pixel is calculated in the Pixel Shader and the pixel is then shaded, determining the illumination effect of each divided block in the virtual scene. The virtual scene may also be rendered by various rendering tools, for example 3DMax or C4D, which this embodiment does not particularly limit.
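Pulling the pieces together, the following CPU-side C++ reference sketches the shading step that would normally run in the Pixel Shader, reusing the types from the earlier sketches. The linear falloff used here is an illustrative stand-in for whichever illumination attenuation algorithm the light source type selects, and index 0 is assumed to mean "no light".

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

Float3 ShadePixel(float worldX, float worldZ, const AABB& viewBox,
                  const std::vector<BlockIndexTexel>& indexTex,
                  const std::vector<LightParams>& lightParams) {
    // Locate the divided block for this pixel, then fetch its index texel.
    BlockIndex b = BlockOf(worldX, worldZ, viewBox.minX, viewBox.minZ);
    const BlockIndexTexel& t = indexTex[b.bz * kGridDim + b.bx];
    const std::uint8_t idx[4] = {t.r, t.g, t.b, t.a};

    Float3 color{0.0f, 0.0f, 0.0f};
    for (std::uint8_t i : idx) {
        if (i == 0) continue;                 // channel holds no light
        const LightParams& lp = lightParams[i];
        float dx = lp.lightPosAndRadius.x - worldX;
        float dz = lp.lightPosAndRadius.z - worldZ;
        float dist   = std::sqrt(dx * dx + dz * dz);
        float radius = lp.lightPosAndRadius.w;
        if (dist >= radius) continue;         // outside the lit range
        float atten = 1.0f - dist / radius;   // linear falloff with distance
        // A spotlight would additionally scale atten by its cone falloff,
        // computed from spotDir and spotAngles.
        color.x += lp.lightColor.x * atten;
        color.y += lp.lightColor.y * atten;
        color.z += lp.lightColor.z * atten;
    }
    return color;
}
```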
In an exemplary embodiment, the method may include the following steps S1201 to S1206, as shown in fig. 12, in particular:
In step S1201, the virtual scene is divided to obtain divided blocks. In step S1202, the camera bounding box is calculated; taking a virtual reality application as an example, the camera bounding box delimits the virtual scene within the current field of view, and the calculation is shown in fig. 4. In step S1203, the light sources in the current field of view and the divided blocks corresponding to each light source are determined: all light sources are traversed, and for each it is determined whether its illumination range intersects the camera bounding box; if so, the divided blocks contained in the intersection are associated with the light source. In step S1204, for each divided block, if the number of corresponding light sources exceeds the preset value, light sources are selected for deletion. In step S1205, the texture information of the divided blocks is generated. In step S1206, the illumination effect is calculated from the texture information. The steps shown in fig. 12 are described in the embodiments above and are therefore not repeated here.
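An illustrative driver tying the earlier sketches to the flow of steps S1201 to S1206 might look as follows; computing the view bounding box from the camera (fig. 4) is abstracted away, and the function name is an assumption.

```cpp
#include <vector>

void ProcessFrame(const AABB& viewBox, const std::vector<LightInfo>& lights,
                  BlockLightLists& blockLights) {
    // S1203: associate every light in the current field of view with the
    // divided blocks its illumination range intersects.
    for (int i = 0; i < static_cast<int>(lights.size()); ++i) {
        AssignLightToBlocks(lights[i], i, viewBox, blockLights);
    }
    // S1204: where a block exceeds the preset light count, drop the weakest
    //        contributors (see AddWithDiscard above).
    // S1205: pack each block's surviving indices into the RGBA index texture
    //        (see BlockIndexTexel) together with the per-light parameters.
    // S1206: shade pixels from that texture (see ShadePixel above).
}
```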
In this embodiment, the illumination effect of the light sources within the field of view is determined by partitioning the scene, and light sources outside the field of view are discarded, which reduces the computational load. The scene depth never needs to be computed, which removes the cost of rendering depth and avoids the flicker caused by delayed (previous-frame) calculation. Moreover, the process of determining the texture information runs on the CPU and requires no additional GPU pass, so the hardware requirements are low; the method can be applied on mobile terminals and has a wide range of application.
Further, in this example embodiment, a virtual light source processing apparatus is further provided, which is configured to execute the virtual light source processing method disclosed above. The device can be applied to a server or terminal equipment.
Referring to fig. 13, the virtual light source processing apparatus 1300 may include: a scene division module 1310, a light source information determination module 1320, and an illumination determination module 1330, wherein:
the scene division module 1310 is configured to divide a virtual scene to obtain a plurality of division blocks of the virtual scene.
The light source information determining module 1320 is configured to obtain a plurality of light sources in a current field of view, and determine texture information of each of the divided blocks according to light source information of each of the light sources.
The illumination determining module 1330 is configured to determine an illumination effect of each of the divided blocks using texture information of each of the divided blocks.
In one exemplary embodiment of the present disclosure, the light source information determining module 1320 may include an illumination range determining unit and a texture determining unit, wherein:
and the illumination range determining unit is used for acquiring the illumination range of each light source from each light source information.
And the texture determining unit is used for determining the light source corresponding to each divided block from the plurality of light sources according to the illumination range so as to determine the texture information of the divided block according to the light source corresponding to the divided block.
In one exemplary embodiment of the present disclosure, the texture determining unit may include a field of view determining unit, an intersection calculating unit, and a correspondence determining unit, wherein:
and the visual field range determining unit is used for determining a bounding box of the dividing block contained in the current visual field range.
And the intersection calculating unit is used for calculating the intersection of the bounding box and the illumination range of the light source for each light source.
And the corresponding relation determining unit is used for corresponding the dividing block with the light source if the dividing block is contained in the intersection.
In an exemplary embodiment of the present disclosure, the apparatus further includes a light source number determining module and a light source rejecting module, wherein:
The light source number determining module is used for determining whether the number of corresponding light sources contained in the texture information of a divided block has reached a preset value.
The light source eliminating module is used for selecting a light source to be discarded from the corresponding light sources if the number of corresponding light sources has reached the preset value, so as to delete the light source to be discarded.
In one exemplary embodiment of the present disclosure, the light source rejection module may include a light source intensity acquisition unit, a contribution value determination unit, and a light source selection unit, wherein:
the light source intensity obtaining unit is used for obtaining the light source position and intensity of each corresponding light source through the light source information of each corresponding light source.
The contribution value determining unit is used for calculating the contribution value of each corresponding light source using the light source position and intensity.
The light source selection unit is used for selecting the light source to be discarded from the corresponding light sources according to the contribution value.
In one exemplary embodiment of the present disclosure, the scene division module 1310 may be specifically configured to: and dividing the space data in the horizontal direction of the virtual scene to obtain the plurality of divided blocks.
In an exemplary embodiment of the present disclosure, the apparatus further comprises a light source updating module for updating a plurality of light sources within the current field of view in response to a change in the field of view of the user.
In one exemplary embodiment of the present disclosure, the illumination determination module 1330 may include a light source index unit, a light source type determination unit, and an illumination attenuation calculation unit, wherein:
The light source index unit is used for acquiring the target light source corresponding to each divided block from the texture information.
The light source type determining unit is used for acquiring the light source type of the target light source from the light source information of the target light source.
The illumination attenuation calculation unit is used for determining an illumination attenuation algorithm according to the light source type, so as to calculate the illumination effect of the target light source on each divided block.
In one exemplary embodiment of the present disclosure, the illumination determination module 1330 may include a color calculation unit and a rendering unit, wherein:
The color calculation unit is used for calculating the color value of each divided block by combining the illumination attenuation algorithm and the light source information.
The rendering unit is used for rendering the divided blocks based on the color values, so as to obtain the illumination effect of each divided block.
Since each functional module of the virtual light source processing apparatus according to the exemplary embodiment of the present disclosure corresponds to a step of the exemplary embodiment of the virtual light source processing method described above, for details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the virtual light source processing method described above in the present disclosure.
Referring to fig. 14, fig. 14 is a schematic diagram illustrating a system architecture of an exemplary application environment to which a virtual light source processing method and a virtual light source processing apparatus according to an embodiment of the present disclosure may be applied.
As shown in fig. 14, the system architecture 1400 may include one or more of the terminal devices 1401, 1402, 1403, a network 1404, and a server 1405. The network 1404 serves as a medium to provide communications links between the terminal devices 1401, 1402, 1403 and the server 1405. The network 1404 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The terminal devices 1401, 1402, 1403 may be a variety of electronic devices with display screens including, but not limited to, desktop computers, portable computers, smartphones, tablet computers, and the like. It should be understood that the number of terminal devices, networks and servers in fig. 14 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 1405 may be a server cluster including a plurality of servers.
The virtual light source processing method provided in the embodiments of the present disclosure is generally executed by the server 1405, and accordingly, the virtual light source processing apparatus is generally disposed in the server 1405. However, it is easily understood by those skilled in the art that the virtual light source processing method provided in the embodiment of the present disclosure may be performed by the terminal devices 1401, 1402, 1403, and accordingly, the virtual light source processing apparatus may be provided in the terminal devices 1401, 1402, 1403, which is not particularly limited in the present exemplary embodiment.
For example, in an exemplary embodiment, the terminal device 1401, 1402, 1403 may divide the virtual scene, obtain a plurality of divided blocks, determine a light source within a current field of view of the user, determine texture information of the divided blocks through the light source, and further render the virtual scene by using the texture information, so as to render an illumination effect in the virtual scene; the virtual scene is closer to the real scene, so that the user experience is more real, and the immersion of the user is improved.
Fig. 15 shows a schematic diagram of a computer system suitable for use in implementing embodiments of the present disclosure.
It should be noted that, the computer system 1500 of the electronic device shown in fig. 15 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
As shown in fig. 15, the computer system 1500 includes a Central Processing Unit (CPU) 1501, which can execute various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1502 or a program loaded from a storage section 1508 into a Random Access Memory (RAM) 1503. In the RAM 1503, various programs and data required for the operation of the system are also stored. The CPU 1501, ROM 1502, and RAM 1503 are connected to each other through a bus 1504. An input/output (I/O) interface 1505 is also connected to bus 1504.
The following components are connected to I/O interface 1505: an input section 1506 including a keyboard, mouse, and the like; an output portion 1507 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage section 1508 including a hard disk and the like; and a communication section 1509 including a network interface card such as a LAN card, a modem, or the like. The communication section 1509 performs communication processing via a network such as the internet. A drive 1510 is also connected to the I/O interface 1505 as needed. Removable media 1511, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 1510 as needed so that a computer program read therefrom is mounted into the storage section 1508 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 1509, and/or installed from the removable medium 1511. When the computer program is executed by the Central Processing Unit (CPU) 1501, the various functions defined in the methods and apparatus of the present application are performed.
It should be noted that the computer readable medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software or by means of hardware, and the described units may also be provided in a processor. In some cases, the names of these units do not constitute a limitation of the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments. For example, the electronic device may implement the steps shown in fig. 1 and fig. 2, and so on.
It should be noted that although several modules or units of a device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A virtual light source processing method, comprising:
dividing a virtual scene to obtain a plurality of divided blocks of the virtual scene;
acquiring a plurality of light sources in a current visual field range, and determining texture information of each divided block according to light source information of each light source;
determining the illumination effect of each divided block by using the texture information of each divided block;
wherein dividing the virtual scene to obtain a plurality of divided blocks of the virtual scene includes:
dividing the spatial data of the virtual scene in the horizontal direction to obtain the plurality of divided blocks;
wherein the determining texture information of each of the divided blocks according to the light source information of each of the light sources includes:
acquiring the illumination range of each light source from the light source information of that light source; the illumination range refers to the extent of the scene affected by the illumination of the light source;
determining a light source corresponding to each divided block from the plurality of light sources according to the illumination range, so as to determine texture information of the divided block according to the light source corresponding to the divided block;
wherein determining, from the plurality of light sources, the light source corresponding to each of the divided blocks according to the illumination range, so as to determine texture information of the divided blocks according to the light source corresponding to the divided blocks, includes:
for each divided block, if the divided block is within the illumination range of the light source, determining that the divided block corresponds to the light source, and traversing each light source to obtain all the light sources corresponding to the divided block; and storing all the light sources corresponding to the divided block as the texture information of the divided block;
the texture information corresponding to the divided blocks comprises index information of the light sources corresponding to the divided blocks, attribute information of the light sources corresponding to the divided blocks, and illumination characteristics of the light sources corresponding to the divided blocks.
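As an illustration of the traversal in claim 1, a per-block light list might be built as in the sketch below; the light tuple layout (x, y, z, radius, intensity), the 2D nearest-point distance test, and all names are assumptions for the sketch, not the claimed implementation.

import math

def block_light_indices(block, lights):
    # block: (min_x, min_z, max_x, max_z); lights: (x, y, z, radius, intensity).
    # A light corresponds to the block when its illumination range,
    # projected onto the horizontal plane, overlaps the block.
    min_x, min_z, max_x, max_z = block
    indices = []
    for i, (lx, _ly, lz, radius, _intensity) in enumerate(lights):
        # Distance from the light to the nearest point of the block (2D).
        dx = max(min_x - lx, 0.0, lx - max_x)
        dz = max(min_z - lz, 0.0, lz - max_z)
        if math.hypot(dx, dz) <= radius:
            indices.append(i)
    return indices

lights = [(10.0, 3.0, 10.0, 8.0, 1.0), (90.0, 3.0, 90.0, 8.0, 1.0)]
print(block_light_indices((0.0, 0.0, 25.0, 25.0), lights))  # [0]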
2. The method of claim 1, wherein determining the light source corresponding to each of the divided blocks according to the illumination range comprises:
determining bounding boxes of the divided blocks contained in the current field of view;
for each light source, calculating an intersection of the bounding box with an illumination range of the light source;
and if the divided block is included in the intersection, mapping the divided block to the light source.
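A common way to realize the bounding-box test of claim 2 is a sphere-versus-AABB overlap check, sketched below under the assumption that the illumination range is a sphere around the light; the claim itself does not fix the exact test.

def aabb_intersects_light(box_min, box_max, light_pos, radius):
    # Squared distance from the light to the closest point of the
    # axis-aligned bounding box; the ranges intersect when it is
    # within the squared illumination radius.
    d2 = 0.0
    for lo, hi, p in zip(box_min, box_max, light_pos):
        if p < lo:
            d2 += (lo - p) ** 2
        elif p > hi:
            d2 += (p - hi) ** 2
    return d2 <= radius * radius

print(aabb_intersects_light((0, 0, 0), (25, 10, 25), (30, 5, 5), 6.0))  # True
print(aabb_intersects_light((0, 0, 0), (25, 10, 25), (40, 5, 5), 6.0))  # False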
3. The method of claim 2, further comprising, prior to mapping the divided block to the light source:
determining whether the number of corresponding light sources contained in the texture information of the divided block has reached a preset value;
and if the number of corresponding light sources has reached the preset value, selecting a light source to be discarded from the corresponding light sources, so as to delete the light source to be discarded.
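A minimal sketch of the cap in claim 3, assuming a preset value of 8 lights per block and an externally supplied discard policy; both the constant and the policy hook are assumptions, not values from the claim.

MAX_LIGHTS_PER_BLOCK = 8  # assumed preset value

def add_light_to_block(block_lights, light, select_discard):
    # If the block already holds the preset number of lights, discard
    # one (chosen by the supplied policy) before accepting the new one.
    if len(block_lights) >= MAX_LIGHTS_PER_BLOCK:
        block_lights.remove(select_discard(block_lights))
    block_lights.append(light)

block_lights = list(range(8))
add_light_to_block(block_lights, 99, select_discard=min)
print(block_lights)  # [1, 2, 3, 4, 5, 6, 7, 99]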
4. The method according to claim 3, wherein said selecting a light source to be discarded from said corresponding light sources comprises:
acquiring the position and intensity of each corresponding light source from its light source information;
calculating a contribution value for each corresponding light source using the position and the intensity;
and selecting the light source to be discarded from the corresponding light sources according to the contribution values.
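One plausible reading of the contribution value in claim 4 is sketched below with an assumed inverse-square heuristic (intensity divided by squared distance to the block centre); the claim does not prescribe a specific formula, so the scoring function is an assumption.

def contribution(light, block_center):
    # Score a light by intensity over squared distance to the block centre.
    d2 = sum((a - b) ** 2 for a, b in zip(light["position"], block_center))
    return light["intensity"] / max(d2, 1e-6)

def light_to_discard(lights, block_center):
    # The weakest contributor is the one to drop.
    return min(lights, key=lambda l: contribution(l, block_center))

candidates = [
    {"position": (1.0, 2.0, 1.0), "intensity": 5.0},
    {"position": (40.0, 2.0, 40.0), "intensity": 5.0},
]
print(light_to_discard(candidates, (0.0, 0.0, 0.0))["position"])  # (40.0, 2.0, 40.0)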
5. The method according to claim 1, wherein the method further comprises:
updating, in response to a change in the user's field of view, the plurality of light sources within the current field of view.
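A minimal sketch of the update in claim 5, caching the last field of view and re-querying the visible lights only when it changes; the lights_in_view helper and the scalar stand-ins for lights and views are assumptions for illustration only.

def update_visible_lights(state, view, all_lights, lights_in_view):
    # Re-collect the light sources in the field of view only when the
    # view has actually changed.
    if state.get("view") != view:
        state["view"] = view
        state["visible"] = lights_in_view(all_lights, view)
    return state["visible"]

state = {}
in_view = lambda lights, center: [l for l in lights if abs(l - center) <= 10]
print(update_visible_lights(state, 0, [5, 50], in_view))   # [5]
print(update_visible_lights(state, 60, [5, 50], in_view))  # [50]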
6. The method of claim 1, wherein determining the illumination effect of each of the divided blocks using the texture information of each of the divided blocks comprises:
obtaining the target light sources corresponding to the divided blocks from the texture information;
acquiring the light source type of the target light source from the light source information of the target light source;
and determining an illumination attenuation algorithm according to the light source type, so as to calculate the illumination effect of the target light source on each divided block.
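By way of illustration, attenuation selection by light source type might look like the sketch below; the directional/point/spot types and the squared-falloff formula are common real-time conventions assumed here, not terms fixed by the claim.

def attenuation(light_type, distance, light_range, cone_factor=1.0):
    # Directional lights are not attenuated by distance; point and spot
    # lights use a smooth squared falloff over their illumination range,
    # with an extra cone term for spotlights.
    if light_type == "directional":
        return 1.0
    falloff = max(0.0, 1.0 - (distance / light_range) ** 2) ** 2
    if light_type == "point":
        return falloff
    if light_type == "spot":
        return falloff * cone_factor
    raise ValueError("unknown light type: " + light_type)

print(attenuation("point", 5.0, 10.0))        # 0.5625
print(attenuation("directional", 5.0, 10.0))  # 1.0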
7. The method of claim 6, wherein determining the illumination effect of each of the divided blocks using the texture information of each of the divided blocks comprises:
calculating the color value of each divided block by combining the illumination attenuation algorithm and the light source information;
rendering the divided blocks based on the color values to obtain illumination effects of the divided blocks.
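A sketch of the shading step in claim 7, under the same assumptions as the earlier sketches: each corresponding light's attenuated contribution is accumulated into one colour value for the block, and the attenuation function is supplied by the caller.

def block_color(block_center, lights, attenuation_fn):
    # Accumulate each light's attenuated contribution into one RGB value
    # for the block, then clamp to the displayable range.
    color = [0.0, 0.0, 0.0]
    for light in lights:
        d = sum((a - b) ** 2
                for a, b in zip(light["position"], block_center)) ** 0.5
        att = attenuation_fn(light["type"], d, light["range"])
        for c in range(3):
            color[c] += light["color"][c] * light["intensity"] * att
    return tuple(min(1.0, v) for v in color)

falloff = lambda _type, d, r: max(0.0, 1.0 - (d / r) ** 2) ** 2
lights = [{"position": (0.0, 5.0, 0.0), "type": "point", "range": 10.0,
           "intensity": 1.0, "color": (1.0, 0.8, 0.6)}]
print(block_color((0.0, 0.0, 0.0), lights, falloff))  # (0.5625, 0.45, 0.3375)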
8. A virtual light source processing apparatus, comprising:
the scene dividing module is used for dividing the virtual scene to obtain a plurality of dividing blocks of the virtual scene;
the light source information determining module is used for obtaining a plurality of light sources in the current visual field range and determining texture information of each divided block according to the light source information of each light source;
the illumination determining module is used for determining illumination effects of the divided blocks by utilizing texture information of the divided blocks;
wherein the scene division module is configured to:
dividing the spatial data of the virtual scene in the horizontal direction to obtain the plurality of divided blocks;
wherein, the light source information determining module includes:
an illumination range determining unit, configured to acquire the illumination range of each light source from the light source information of that light source; the illumination range refers to the extent of the scene affected by the illumination of the light source;
a texture determining unit, configured to determine, from the plurality of light sources, a light source corresponding to each of the divided blocks according to the illumination range, so as to determine texture information of the divided blocks according to the light source corresponding to the divided blocks;
wherein the texture determining unit is configured to:
for each divided block, if the divided block is within the illumination range of the light source, determining that the divided block corresponds to the light source, and traversing each light source to obtain all the light sources corresponding to the divided block; and storing all the light sources corresponding to the divided block as the texture information of the divided block;
the texture information corresponding to the divided blocks comprises index information of the light sources corresponding to the divided blocks, attribute information of the light sources corresponding to the divided blocks, and illumination characteristics of the light sources corresponding to the divided blocks.
9. A computer readable medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1-7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-7 via execution of the executable instructions.
CN202010055325.5A 2020-01-17 2020-01-17 Virtual light source processing method, device, medium and electronic equipment Active CN111260766B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010055325.5A CN111260766B (en) 2020-01-17 2020-01-17 Virtual light source processing method, device, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111260766A (en) 2020-06-09
CN111260766B (en) 2024-03-15

Family

ID=70948930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010055325.5A Active CN111260766B (en) 2020-01-17 2020-01-17 Virtual light source processing method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111260766B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111760277B (en) * 2020-07-06 2024-05-28 网易(杭州)网络有限公司 Illumination rendering method and device
US11663773B2 (en) 2020-08-21 2023-05-30 Nvidia Corporation Using importance resampling to reduce the memory incoherence of light sampling
US20220058851A1 (en) * 2020-08-21 2022-02-24 Nvidia Corporation Grid-based light sampling for ray tracing applications
CN111915712B (en) * 2020-08-28 2024-05-28 网易(杭州)网络有限公司 Illumination rendering method and device, computer readable medium and electronic equipment
CN112562051B (en) * 2020-11-30 2023-06-27 腾讯科技(深圳)有限公司 Virtual object display method, device, equipment and storage medium
CN112882677A (en) * 2021-02-08 2021-06-01 洲磊新能源(深圳)有限公司 Technical method for processing RGB LED multi-color light source
CN112819938A (en) * 2021-02-09 2021-05-18 腾讯科技(深圳)有限公司 Information processing method and device and computer readable storage medium
CN113052950B (en) * 2021-03-31 2021-12-17 完美世界(北京)软件科技发展有限公司 Illumination calculation method and device, computer equipment and computer readable storage medium
CN114399425A (en) * 2021-12-23 2022-04-26 北京字跳网络技术有限公司 Image processing method, video processing method, device, equipment and medium
CN114119853B (en) * 2022-01-26 2022-06-10 腾讯科技(深圳)有限公司 Image rendering method, device, equipment and medium
CN115082611B (en) * 2022-08-18 2022-11-11 腾讯科技(深圳)有限公司 Illumination rendering method, apparatus, device and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6078332A (en) * 1997-01-28 2000-06-20 Silicon Graphics, Inc. Real-time lighting method using 3D texture mapping
CN105335996A (en) * 2014-06-30 2016-02-17 北京畅游天下网络技术有限公司 Light irradiation effect calculation method and device
CN108236783A (en) * 2018-01-09 2018-07-03 网易(杭州)网络有限公司 The method, apparatus of illumination simulation, terminal device and storage medium in scene of game
CN110310356A (en) * 2019-06-26 2019-10-08 北京奇艺世纪科技有限公司 A kind of scene rendering method and apparatus

Also Published As

Publication number Publication date
CN111260766A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
CN111260766B (en) Virtual light source processing method, device, medium and electronic equipment
US10347042B2 (en) Importance sampling of sparse voxel octrees
US20150228110A1 (en) Volume rendering using adaptive buckets
CN111612882B (en) Image processing method, image processing device, computer storage medium and electronic equipment
CN109992640B (en) Method, device, equipment and storage medium for determining position grid
KR20130135309A (en) Data storage address assignment for graphics processing
CN111210497B (en) Model rendering method and device, computer readable medium and electronic equipment
CN112288873A (en) Rendering method and device, computer readable storage medium and electronic equipment
CN112891946B (en) Game scene generation method and device, readable storage medium and electronic equipment
CN111915712B (en) Illumination rendering method and device, computer readable medium and electronic equipment
US9235663B2 (en) Method for computing the quantity of light received by a participating media, and corresponding device
CN114627239B (en) Bounding box generation method, device, equipment and storage medium
CN109960887B (en) LOD-based model making method and device, storage medium and electronic equipment
CN113205601A (en) Roaming path generation method and device, storage medium and electronic equipment
CN110930492B (en) Model rendering method, device, computer readable medium and electronic equipment
CN111870953A (en) Height map generation method, device, equipment and storage medium
KR102176511B1 (en) Apparatus and Method for processing image
CN111569418B (en) Rendering method, device and medium for content to be output and electronic equipment
CN114419299A (en) Virtual object generation method, device, equipment and storage medium
CN111223105B (en) Image processing method and device
CN111790151A (en) Method and device for loading object in scene, storage medium and electronic equipment
CN111862342A (en) Texture processing method and device for augmented reality, electronic equipment and storage medium
CN116894933B (en) Three-dimensional model comparison method, device, equipment and storage medium
CN111145358A (en) Image processing method, device and hardware device
CN115761026A (en) Model baking method, model baking device, medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant