CN111260766A - Virtual light source processing method, device, medium and electronic equipment

Publication number
CN111260766A
Authority
CN
China
Prior art keywords
light source, determining, illumination, information, light sources
Legal status
Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number
CN202010055325.5A
Other languages
Chinese (zh)
Other versions
CN111260766B (en)
Inventor
李籽良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010055325.5A
Publication of CN111260766A
Application granted
Publication of CN111260766B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)

Abstract

The present disclosure provides a virtual light source processing method, a virtual light source processing apparatus, a computer readable medium, and an electronic device, and relates to the technical field of image processing. The method comprises the following steps: dividing a virtual scene to obtain a plurality of divided blocks of the virtual scene; acquiring a plurality of light sources in the current view range and determining texture information of each divided block according to the light source information of each light source; and determining the illumination effect of each divided block by using its texture information. The virtual light source processing method of the present disclosure can, to a certain extent, relieve the heavy GPU load caused by dynamic light source rendering and thereby reduce resource consumption during rendering.

Description

Virtual light source processing method, device, medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a virtual light source processing method, a virtual light source processing apparatus, a computer readable medium, and an electronic device.
Background
To make light sources in a scene appear more realistic and give users a more convincing visual experience, computer applications have started using dynamic light sources. Some high-end applications need to process hundreds of dynamic light sources per frame, so the efficiency requirements for dynamic light source processing are very high. Such rendering requirements can be met by the rendering pipeline on consoles or other high-end devices, but the pipeline algorithms involved are very complex, which is unacceptable for mobile platforms or lower-end devices.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to a virtual light source processing method, a virtual light source processing apparatus, a computer readable medium, and an electronic device, which can, to a certain extent, overcome the incompatibility of complex light source processing with mobile platforms and thereby improve the efficiency of light source processing in a scene.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a virtual light source processing method, including:
dividing a virtual scene to obtain a plurality of divided blocks of the virtual scene;
acquiring a plurality of light sources in the current view range, and determining texture information of each partition block according to light source information of each light source;
and determining the illumination effect of each divided block by using the texture information of each divided block.
In an exemplary embodiment of the present disclosure, the determining texture information of each of the divided blocks according to light source information of each of the light sources includes:
acquiring the illumination range of each light source from each light source information;
and determining a light source corresponding to each of the divided blocks according to the illumination range from the plurality of light sources, so as to determine texture information of the divided blocks according to the light sources corresponding to the divided blocks.
In an exemplary embodiment of the present disclosure, the determining a light source corresponding to each of the divided blocks according to the illumination range includes:
determining a bounding box of the divided blocks contained in the current view range;
for each light source, calculating the intersection of the bounding box and the illumination range of the light source;
and if the divided block is contained in the intersection, associating the divided block with the light source.
In an exemplary embodiment of the present disclosure, before associating the divided block with the light source, the method further includes:
determining whether the number of the corresponding light sources contained in the texture information of the divided blocks is a preset value;
and if the number of the corresponding light sources is a preset value, selecting the light sources to be discarded from the corresponding light sources so as to delete the light sources to be discarded.
In an exemplary embodiment of the present disclosure, the selecting a light source to be discarded from the corresponding light sources includes:
acquiring the light source position and intensity of each corresponding light source according to the light source information of each corresponding light source;
calculating the contribution value of each corresponding light source by using the position and the intensity of the light source;
and selecting a light source to be discarded from the corresponding light sources according to the contribution value.
In an exemplary embodiment of the present disclosure, the dividing the virtual scene to obtain a plurality of divided blocks of the virtual scene includes:
and dividing the spatial data of the virtual scene in the horizontal direction to obtain the plurality of divided blocks.
In an exemplary embodiment of the present disclosure, the method further comprises:
updating the plurality of light sources within the current field of view in response to a change in the user field of view.
In an exemplary embodiment of the present disclosure, the determining the lighting effect of each of the divided blocks by using the texture information of each of the divided blocks includes:
acquiring a target light source corresponding to each divided block from the texture information;
acquiring the light source type of the target light source from the light source information of the target light source;
and determining an illumination attenuation algorithm according to the light source type to calculate the illumination effect of the target light source on each partitioned block.
In an exemplary embodiment of the present disclosure, the determining the lighting effect of each of the divided blocks by using the texture information of each of the divided blocks includes:
calculating color values of the divided blocks by combining the illumination attenuation algorithm and the light source information;
rendering the partition blocks based on the color values to obtain the illumination effect of each partition block.
According to a second aspect of the present disclosure, there is provided a virtual light source processing apparatus, including a scene dividing module, a light source information determining module, and an illumination determining module, wherein:
the scene dividing module is used for dividing a virtual scene to obtain a plurality of divided blocks of the virtual scene;
the light source information determining module is used for acquiring a plurality of light sources in the current visual field range and determining the texture information of each partition block according to the light source information of each light source;
and the illumination determining module is used for determining the illumination effect of each partition block by using the texture information of each partition block.
In an exemplary embodiment of the present disclosure, the light source information determining module may include an illumination range determining unit and a texture determining unit, wherein:
and the illumination range determining unit is used for acquiring the illumination range of each light source from each light source information.
And the texture determining unit is used for determining the light source corresponding to each divided block according to the illumination range from the plurality of light sources so as to determine the texture information of the divided blocks according to the light sources corresponding to the divided blocks.
In an exemplary embodiment of the present disclosure, the texture determining unit may include a view range determining unit, an intersection calculating unit, and a correspondence determining unit, wherein:
and the visual field range determining unit is used for determining the bounding box of the partition blocks contained in the current visual field range.
And the intersection calculation unit is used for calculating the intersection of the bounding box and the illumination range of the light source for each light source.
A correspondence determining unit configured to associate the divided block with the light source if the divided block is included in the intersection.
In an exemplary embodiment of the present disclosure, the apparatus further includes a light source number determining module and a light source rejecting module, wherein:
and the light source quantity determining module is used for determining whether the quantity of the corresponding light sources contained in the texture information of the divided blocks is a preset value.
And the light source eliminating module is used for selecting the light source to be discarded from the corresponding light sources to delete the light source to be discarded if the number of the corresponding light sources is a preset value.
In an exemplary embodiment of the present disclosure, the light source rejecting module may include a light source intensity obtaining unit, a contribution value determining unit, and a light source selecting unit, wherein:
and the light source intensity acquisition unit is used for acquiring the light source position and the intensity of each corresponding light source according to the light source information of each corresponding light source.
And the contribution value determining unit is used for calculating the contribution value of each corresponding light source by using the light source position and the intensity.
And the light source selection unit is used for selecting the light source to be discarded from the corresponding light source according to the contribution value.
In an exemplary embodiment of the present disclosure, the scene division module may be specifically configured to: and dividing the spatial data of the virtual scene in the horizontal direction to obtain the plurality of divided blocks.
In an exemplary embodiment of the present disclosure, the apparatus further includes a light source updating module for updating the plurality of light sources in the current field of view in response to a change in the user field of view.
In an exemplary embodiment of the present disclosure, the illumination determination module may include a light source indexing unit, a light source type determination unit, and an illumination attenuation calculation unit, wherein:
and the light source index unit is used for acquiring the target light source corresponding to each divided block from the texture information.
And the light source type determining unit is used for acquiring the light source type of the target light source from the light source information of the target light source.
And the illumination attenuation calculating unit is used for determining an illumination attenuation algorithm according to the light source type so as to calculate the illumination effect of the target light source on each partition block.
In an exemplary embodiment of the present disclosure, the illumination determination module may include a color calculation unit and a rendering unit, wherein:
and the color calculation unit is used for calculating the color value of each divided block by combining the illumination attenuation algorithm and the light source information.
And the rendering unit is used for rendering the partition blocks based on the color values so as to obtain the illumination effect of each partition block.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any one of the above via execution of the executable instructions.
According to a fourth aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of the above.
Exemplary embodiments of the present disclosure may have some or all of the following benefits:
In the virtual light source processing method provided by an exemplary embodiment of the present disclosure, the virtual scene is divided into a plurality of partitioned blocks, and texture information is determined for each partitioned block from the light sources affecting it. Compared with shading each light source separately through the rendering pipeline, this greatly reduces algorithm complexity and improves processing efficiency; moreover, no scene depth data needs to be processed, which reduces resource consumption and widens the range of applicable devices.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 schematically shows a flow diagram of a virtual light source processing method according to one embodiment of the present disclosure;
fig. 2 schematically shows a flow diagram of a virtual light source processing method according to another embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow diagram of a virtual light source processing method according to one embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of a bounding box for a current field of view in one embodiment according to the present disclosure;
FIG. 5 schematically shows a schematic diagram of the intersection of bounding boxes with illumination ranges in an embodiment in accordance with the present disclosure;
FIG. 6 schematically shows a schematic diagram of the intersection of bounding boxes with illumination ranges in another embodiment according to the present disclosure;
FIG. 7 schematically illustrates a flow diagram of a virtual light source processing method according to one embodiment of the present disclosure;
FIG. 8 schematically illustrates a flow diagram of a virtual light source processing method according to one embodiment of the present disclosure;
FIG. 9 is a diagram that schematically illustrates a data structure for texture information, in accordance with an embodiment of the present disclosure;
FIG. 10 schematically illustrates a flow diagram of a virtual light source processing method according to one embodiment of the present disclosure;
FIG. 11 schematically illustrates a flow diagram of a virtual light source processing method according to another embodiment of the present disclosure;
FIG. 12 schematically illustrates a flow diagram of a virtual light source processing method according to one embodiment of the present disclosure;
fig. 13 schematically illustrates a block diagram of a virtual light source processing apparatus according to one embodiment of the present disclosure;
FIG. 14 schematically illustrates a system architecture diagram for implementing a virtual light source processing method according to one embodiment of the present disclosure;
FIG. 15 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The technical solution of the embodiment of the present disclosure is explained in detail below:
the conventional forward rendering pipeline renders n objects under m light sources, and needs to draw m × n times, so that the performance is greatly reduced when the number of the objects and the light sources in a scene is increased. In one solution provided by the inventor, the light source may be processed by a Rendering method Based on a partition, and an idea of Rendering Based on a partition (Tile Based Rendering) is to render a depth of a scene in advance, then divide a screen space into a plurality of tiles (e.g., 32 × 32), and calculate which tiles are affected by the light source according to scene depth data, so as to perform illumination Rendering on each Tile. Although the complexity of the algorithm can be reduced by the block-based rendering, the depth of the scene needs to be obtained first, which doubles the number of Draw calls (commands for calling the image programming interface by the CPU) and is unacceptable for the mobile platform. In another solution provided by the inventor, Draw Call can be avoided by using the depth of the previous frame for calculation, but delaying the calculation for one frame can cause screen flicker, and rendering errors can occur if the player takes a shot with fast motion, resulting in poor rendering effect.
In view of one or more of the above problems, the present example embodiment provides a virtual light source processing method. Referring to fig. 1, the method may include the steps of:
step S110: the method comprises the steps of dividing a virtual scene to obtain a plurality of divided blocks of the virtual scene.
Step S120: and acquiring a plurality of light sources in the current visual field range, and determining the texture information of each divided block according to the light source information of each light source.
Step S130: and determining the illumination effect of each divided block by using the texture information of each divided block.
In the virtual light source processing method provided by an exemplary embodiment of the present disclosure, the virtual scene is divided into a plurality of partitioned blocks, and texture information is determined for each partitioned block from the light sources affecting it. Compared with shading each light source separately through the rendering pipeline, this greatly reduces algorithm complexity and improves processing efficiency; moreover, no scene depth data needs to be processed, which reduces resource consumption and widens the range of applicable devices.
The above steps of the present exemplary embodiment will be described in more detail below.
In step S110, a virtual scene is divided, and a plurality of divided blocks of the virtual scene are obtained.
The virtual scene is a three-dimensional (or higher-dimensional) space. Taking game scenes as an example, most game scenes do not vary much visually in the vertical direction, so the virtual scene can be divided in the horizontal direction, for example along the x-z directions of its spatial bounding box, to obtain a plurality of divided blocks. Uniform division yields divided blocks of equal size; the grid resolution can be chosen according to the actual size of the virtual scene, for example 32 × 32, 64 × 64, or 128 × 128, or according to other specifications such as 8 × 8, which this embodiment does not limit. Dividing converts the virtual scene into 2D divided blocks, reducing algorithm complexity and improving computational efficiency; dividing in the horizontal direction also handles light sources that overlap in the horizontal direction, achieving a better illumination effect.
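As a minimal sketch of such a horizontal division (the uniform grid, all names, and the clamping behavior are illustrative assumptions, shown here in C++), the mapping from world coordinates to a divided block might be:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical uniform grid over the scene's horizontal (x-z) extent,
// e.g. 2-meter divided blocks on a 128 x 128 grid.
struct TileGrid {
    float originX, originZ;  // minimum corner of the scene bounding box
    float tileSize;          // world-space edge length of one divided block
    int   gridDim;           // divided blocks per side

    // Map a world-space position to a divided-block index, ignoring y.
    int tileIndex(float x, float z) const {
        int tx = static_cast<int>(std::floor((x - originX) / tileSize));
        int tz = static_cast<int>(std::floor((z - originZ) / tileSize));
        tx = std::clamp(tx, 0, gridDim - 1);
        tz = std::clamp(tz, 0, gridDim - 1);
        return tz * gridDim + tx;
    }
};
```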
In step S120, a plurality of light sources in the current view range are acquired, and texture information of each of the divided blocks is determined according to light source information of each of the light sources.
The current view range is the range of the scene displayed on the current screen, so the light sources within the currently displayed coordinate range can be acquired. Every element in the virtual scene has unique coordinates, so the coordinates of all light sources are known and the light sources within the current view range can be determined. As the user operates, the current view range changes accordingly and the light sources change dynamically (for example, a light source moves out of view); if a change in the current view range is detected, the light sources falling within the new view range are re-acquired, keeping the light source set up to date. Light source information is defined by the developer when the virtual scene is designed, and may include attribute information of the light source such as an identification number, coordinates in the virtual scene, light source type, light source color, and illumination intensity, and may further include other information such as illumination attenuation information and illumination shape, which this embodiment does not limit.
The texture information is used to determine the lighting effect on a divided block. It may include index information of the light sources corresponding to the block, through which the lighting effect is determined; for example, the index information of divided block A may reference four light sources a, b, c, and e. It may also include attribute information of the light sources, such as light source color and associated maps, as well as the illumination characteristics of the divided block, from which an image of the block's lighting effect can be determined. The light source information can be read from the resource directory of the virtual scene, after which the illumination contribution of each light source to each divided block is calculated. Specifically, the color and type of a light source are determined from its light source information; different light source types attenuate differently, and the closer the light source, the more illumination a divided block receives. The distance between a divided block and a light source is computed from their coordinates, and the illumination at that distance is computed by an illumination attenuation algorithm. Each piece of light source information within the current view range is traversed in turn to determine its illumination contribution to each divided block; after all light sources in the current view range have been traversed, the texture information of every divided block is obtained. For example, if the current view range contains 5 light sources, then for divided block A the illumination contributions of light sources a, b, c, d, and e are computed in turn, giving the texture information of A.
In an exemplary embodiment, the method may include step S201 and step S202, as shown in fig. 2, wherein:
In step S201, the illumination range of each light source is acquired from its light source information. The illumination range is the extent of the scene affected by the light source's illumination. Specifically, the coordinates and illumination type of the light source can be queried from the light source information to determine the illumination range; for example, if the light source is located at O and its illumination type is circular, the illumination range is the circle of radius r centered at O, where r is the illumination radius. The illumination range may also be obtained in other ways, for example by querying the affected size range from the light source information and then determining the divided blocks within that range from the light source's coordinates.
In step S202, the light sources corresponding to each divided block are determined from the plurality of light sources according to the illumination ranges, so that the texture information of each divided block is obtained from its corresponding light sources. Specifically, once the illumination range of each light source in the current view range is determined, the divided blocks affected by each light source can be determined: for each divided block, if it lies within the illumination range of a light source, the block is associated with that light source. After traversing all light sources, all light sources corresponding to a divided block are known and can be stored as the block's texture information. For example, if divided block 1 is affected by the illumination of light sources a, c, and f, its texture information may be the illumination information of light sources a, c, and f. Specifically, the method for determining the light sources corresponding to each divided block may include the following steps S301 to S303, as shown in fig. 3, wherein:
In step S301, the bounding box of the divided blocks contained in the current view range is determined. For example, as shown in fig. 4, the camera position is taken as starting point P0, a circle is formed around the current view center P1, and bounding box 401 is obtained, where V0 is the camera's line-of-sight direction. The actual size of the bounding box is determined by the divided block size and the texture image size: if each divided block has size N and the texture image has size M, all divided blocks cover N × M; for example, with a 128 × 128 texture image and 2-meter divided blocks, the bounding box covers 256 meters × 256 meters. Because scenes outside the view range are not currently displayed, only the light sources corresponding to divided blocks within the current view range are computed, which reduces the amount of computation and improves efficiency.
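Under the sizes assumed in the earlier sketch, this bounding box might be computed as follows (the centering on the view center P1 and the parameter names are assumptions):

```cpp
// Horizontal (x-z) axis-aligned box covering all divided blocks in view.
struct AABB2D { float minX, minZ, maxX, maxZ; };

AABB2D viewBoundingBox(float centerX, float centerZ,   // view center P1
                       float tileSize, int textureDim) // e.g. 2 m, 128
{
    // Coverage is N x M: divided-block size times texture image size,
    // e.g. 2 m * 128 = 256 m per side, as in the example above.
    const float half = 0.5f * tileSize * textureDim;
    return { centerX - half, centerZ - half, centerX + half, centerZ + half };
}
```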
In step S302, for each light source, the intersection of the bounding box and the light source's illumination range is calculated. The illumination range contains the divided blocks affected by the light source, so intersecting it with the divided blocks in the current view range yields the blocks within the view that the light source affects. Take point light sources and spotlights as examples. If a point light source's illumination shape is a circle, the circle is obtained by querying the light source information and intersected with the bounding box of the current view range's divided blocks, and the intersection is the point light source's effective illumination range, as shown in fig. 5; if a spotlight's shape is a triangle, the triangle is intersected with the bounding box of the current view range to obtain the spotlight's effective illumination range, as shown in fig. 6.
In step S303, if a divided block is contained in the intersection, the divided block is associated with the light source. After the intersection of each light source's illumination range with the current view range is computed, each divided block in the current view range is checked to determine which intersections contain it. For example, 10 light sources yield 10 intersections; if divided block 1 is contained in intersections 1, 2, and 5, the light sources corresponding to intersections 1, 2, and 5 are associated with divided block 1 and stored as its texture information.
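Building on the TileGrid and AABB2D sketches above, the intersection test and block-to-light association might look like this; the circle-vs-box test is an assumed concretization, since the patent only requires that the intersection be computed:

```cpp
#include <algorithm>
#include <vector>

struct PointLight { float x, z, radius; };  // horizontal footprint only

// Classic circle-vs-AABB overlap test.
bool circleOverlapsBox(const PointLight& l, const AABB2D& box) {
    const float cx = std::max(box.minX, std::min(l.x, box.maxX));
    const float cz = std::max(box.minZ, std::min(l.z, box.maxZ));
    const float dx = l.x - cx, dz = l.z - cz;
    return dx * dx + dz * dz <= l.radius * l.radius;
}

// World-space box of one divided block (reusing TileGrid from the earlier sketch).
AABB2D tileBox(const TileGrid& g, int tx, int tz) {
    return { g.originX + tx * g.tileSize,       g.originZ + tz * g.tileSize,
             g.originX + (tx + 1) * g.tileSize, g.originZ + (tz + 1) * g.tileSize };
}

// Associate each light with every divided block its illumination range covers.
void assignLightsToTiles(const std::vector<PointLight>& lights, const TileGrid& g,
                         const AABB2D& view,
                         std::vector<std::vector<int>>& tileLights) {
    tileLights.assign(static_cast<size_t>(g.gridDim) * g.gridDim, {});
    for (int li = 0; li < static_cast<int>(lights.size()); ++li) {
        if (!circleOverlapsBox(lights[li], view)) continue;  // light outside the view
        for (int tz = 0; tz < g.gridDim; ++tz)
            for (int tx = 0; tx < g.gridDim; ++tx)
                if (circleOverlapsBox(lights[li], tileBox(g, tx, tz)))
                    tileLights[tz * g.gridDim + tx].push_back(li);
    }
}
```

A real implementation would loop only over the blocks inside the light's own footprint rather than the full grid; the exhaustive loop is kept here for brevity.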
Furthermore, it can be seen that the same divided block can be simultaneously affected by illumination from multiple light sources, i.e., the divided block can correspond to multiple light sources. In an exemplary embodiment, the determining the light sources corresponding to the divided blocks further includes the following steps S701 and S702, as shown in fig. 7, where:
In step S701, it is determined whether the number of already-associated light sources contained in the texture information of the divided block has reached a preset value. When determining the light sources corresponding to a divided block, the number of associated light sources can be tracked by a parameter: each time a light source is stored into the texture information of the block, the parameter is incremented by 1, and the parameter is then compared against the preset value. The preset value can be chosen according to actual requirements, for example 4, 5, or 6, or other values such as 2 or 8, which this embodiment does not limit.
In step S702, if the number of associated light sources has reached the preset value, a light source to be discarded is selected and deleted; this may be the candidate light source about to be added, or an associated light source chosen as described below. Since the index texture has 4 channels, the texture information of each divided block can hold 4 light source indices, so the preset value may be 4; when more than 4 light sources would correspond to a divided block, light sources are selected for deletion, reducing wasted resources and power consumption.
The method for selecting the light source to be discarded from the corresponding light sources may include the following steps S801 to S803, as shown in fig. 8, specifically:
In step S801, the light source position and intensity of each associated light source are acquired from its light source information. The light source information may include the coordinates and intensity parameters of the light source, so the position and intensity of each light source can be obtained by querying its light source information; alternatively, an intensity may be preset for each light source type and looked up according to the type of the light source. The farther from the light source position, the less illumination is received, i.e., the illumination intensity decays with increasing distance.
In step S802, the contribution value of each associated light source is calculated using its position and intensity. For example, the contribution value of each associated light source on the divided block can be calculated using the inverse square law of physics; it may also be calculated by another illumination attenuation algorithm or a customized algorithm, which this embodiment does not limit.
In step S803, a light source to be discarded is selected from the associated light sources according to the contribution values. For example, after the illumination contributions of the associated light sources to the divided block are obtained, the light sources with larger contributions may be retained and the light source with the smallest contribution taken as the one to discard. A new light source can then be added once the low-contribution light source has been deleted, so the light sources in the texture information of a divided block never exceed the preset value, reducing artifacts and improving the display effect.
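A minimal sketch of this capped, contribution-based selection; the inverse-square formula follows the physics law mentioned above, while the eviction policy details and all names are assumptions:

```cpp
#include <algorithm>
#include <vector>

struct LightRef { int index; float contribution; };

// Inverse-square contribution of a light at the divided block's center.
float contributionAt(float intensity, float lx, float lz,
                     float blockCx, float blockCz) {
    const float dx = lx - blockCx, dz = lz - blockCz;
    const float d2 = std::max(dx * dx + dz * dz, 1e-4f);  // avoid division by zero
    return intensity / d2;
}

constexpr int kMaxLightsPerBlock = 4;  // one light per RGBA index channel

// Add a candidate light to a block's list, evicting the weakest contributor
// once the preset cap has been reached.
void insertCapped(std::vector<LightRef>& block, LightRef candidate) {
    if (static_cast<int>(block.size()) < kMaxLightsPerBlock) {
        block.push_back(candidate);
        return;
    }
    auto weakest = std::min_element(block.begin(), block.end(),
        [](const LightRef& a, const LightRef& b) {
            return a.contribution < b.contribution;
        });
    if (weakest->contribution < candidate.contribution)
        *weakest = candidate;  // discard the smallest contributor
}
```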
For example, after the light sources corresponding to a divided block are determined, the light source information of each is traversed and written into the texture information, whose format may be RGBA. Fig. 9 shows a data structure for the texture information, which stores the light source information of multiple light sources. As shown in fig. 9, the four channels of a pixel correspond to four light sources, and since each channel can be 8 bits, up to 255 light sources can be indexed. Here, LightPosAndRadius, four floating point numbers (float4), represents the position of the light source (e.g., x, y, z) and its radius; LightColor represents the color of the light source; SpotDir represents the direction of a spotlight; and SpotAngles represents the inner and outer angles of a spotlight, used to calculate the spotlight's illumination attenuation. In this embodiment, the process of determining the texture information of the divided blocks runs on the CPU; compared with computing it on the GPU, this places lower demands on hardware and is easier to debug and extend.
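A sketch of the Fig. 9 record in C++, with field names following the text (LightPosAndRadius, LightColor, SpotDir, SpotAngles); the exact packing and the index-texel layout are assumptions:

```cpp
#include <cstdint>

// Per-light record, four float4 fields as described for Fig. 9.
struct GPULight {
    float lightPosAndRadius[4];  // x, y, z position + illumination radius
    float lightColor[4];         // light source color
    float spotDir[4];            // spotlight direction
    float spotAngles[4];         // spotlight inner/outer angles, for attenuation
};

// One texel of the per-block index texture: four 8-bit channels, each
// indexing one of up to 255 light sources (index 0 reserved for "no light"
// in this sketch).
struct BlockIndexTexel { std::uint8_t r, g, b, a; };
```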
Next, with continued reference to fig. 1, in step S130, the texture information of each of the divided blocks is used to determine the lighting effect of each of the divided blocks.
After the texture information of each divided block is determined, the illumination effect can be calculated from it when the virtual scene is rendered. For example, the illumination effect may be calculated in the Pixel Shader: first, the UV coordinate into the texture information is computed from the divided block coordinates, the divided block size, and the pixel's spatial coordinates; the light source data is then read from the texture information at that coordinate, and the pixel's RGBA value is calculated from the light source data, giving the pixel's illumination effect. Specifically, the method may include steps S1001 to S1003, as shown in fig. 10, wherein:
In step S1001, the target light sources corresponding to each divided block are acquired from the texture information. Since the texture information stores the index of each light source, the target light sources of each divided block can be determined from it. A divided block may also have no texture information; whether the texture information contains a light source can be indicated by a parameter whose value is 0 if the block is unaffected by any light source and 1 otherwise. When acquiring the target light sources, this parameter is checked first, and the target light sources are fetched only when its value is not 0, which reduces the amount of computation.
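The lookup in step S1001 presupposes the world-to-UV mapping described above; a sketch, written in C++ for consistency with the other examples, with parameter names mirroring the assumed CPU-side grid:

```cpp
// Convert a pixel's world position to the UV of the block index texture.
void worldToIndexUV(float worldX, float worldZ,
                    float originX, float originZ,
                    float tileSize, int textureDim,
                    float& u, float& v) {
    const float extent = tileSize * textureDim;  // world size covered by the texture
    u = (worldX - originX) / extent;
    v = (worldZ - originZ) / extent;
    // The shader then samples the index texture at (u, v) to fetch the
    // four light indices for this pixel's divided block.
}
```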
In step S1002, a light source type of the target light source is acquired from the light source information of the target light source. The light source information of the target light source may include a light source type of the target light source, such as a point light source, a spotlight, and the like, and different light source types may be illuminated differently, so that the light source type of the target light source is obtained to calculate the illumination effect according to the light source type.
In step S1003, an illumination attenuation algorithm is determined according to the light source type to calculate an illumination effect of the target light source on each of the divided blocks. In this embodiment, a plurality of light source types may be predetermined, for example, a point light source, a spotlight, or other customized types, and then an illumination attenuation algorithm corresponding to each light source type may be determined, where the illumination attenuation algorithm may include a physics theorem or a customized method, for example, the illumination intensity linearly attenuates with distance, and the embodiment is not limited thereto. The RGBA values of the pixels contained in each divided block can be calculated by the illumination attenuation algorithm, thereby determining the illumination effect on the pixel. Specifically, the method for determining the lighting effect may include the following steps S1101 and S1102, as shown in fig. 11, where:
In step S1101, the color value of each divided block is calculated by combining the illumination attenuation algorithm with the light source information. For example, a lighting calculation function may be applied per index channel: the R-channel contribution is obtained as CalculateLighting(IndexTexData.r), where IndexTexData.r is the light source index stored in the R channel, and the G, B, and A channels are processed in turn in the same way, giving the color value of the divided block.
In step S1102, the divided blocks are rendered based on the color values to obtain the illumination effect of each divided block. For example, pixel shading may be performed on the GPU by a Pixel Shader during object rendering: the color value of each pixel is calculated in the Pixel Shader and the pixel is shaded accordingly, determining the illumination effect of each divided block in the virtual scene. The virtual scene may also be rendered with various rendering tools, for example 3ds Max or C4D, which this embodiment does not limit.
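Putting steps S1003 and S1101-S1102 together, a CPU-side reference sketch of the per-pixel work, reusing GPULight and BlockIndexTexel from the earlier sketch; the falloff formulas, the point-light simplification, and treating index 0 as "no light" are all assumptions, since the patent leaves the attenuation algorithm open:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

enum class LightType { Point, Spot };

// Type-dependent attenuation: distance falloff reaching zero at the radius,
// plus an angular falloff between a spotlight's inner and outer cone.
float attenuation(LightType type, float dist, float radius,
                  float cosToLight, float cosInner, float cosOuter) {
    const float t = std::max(0.0f, 1.0f - dist / radius);
    float att = t * t;
    if (type == LightType::Spot) {
        const float denom = std::max(cosInner - cosOuter, 1e-4f);
        const float s = (cosToLight - cosOuter) / denom;
        att *= std::min(1.0f, std::max(0.0f, s));
    }
    return att;
}

// Fetch up to four light indices from the block's index texel and accumulate
// the attenuated light colors for one pixel.
void shadePixel(const BlockIndexTexel& texel, const GPULight* lights,
                float px, float py, float pz, float outRGB[3]) {
    const std::uint8_t idx[4] = { texel.r, texel.g, texel.b, texel.a };
    outRGB[0] = outRGB[1] = outRGB[2] = 0.0f;
    for (std::uint8_t i : idx) {
        if (i == 0) continue;  // channel holds no light (assumed convention)
        const GPULight& l = lights[i];
        const float dx = l.lightPosAndRadius[0] - px;
        const float dy = l.lightPosAndRadius[1] - py;
        const float dz = l.lightPosAndRadius[2] - pz;
        const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        const float att = attenuation(LightType::Point, dist,
                                      l.lightPosAndRadius[3], 0.0f, 0.0f, 0.0f);
        for (int c = 0; c < 3; ++c)
            outRGB[c] += l.lightColor[c] * att;  // accumulate this light's color
    }
}
```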
In an exemplary embodiment, the method may include the following steps S1201 to S1206, as shown in fig. 12, specifically:
In step S1201, the virtual scene is divided to obtain divided blocks. In step S1202, the camera bounding box is calculated; taking a virtual reality application as an example, the camera bounding box bounds the virtual scene within the current view range, and the calculation is shown in fig. 4. In step S1203, the light sources in the current view range and their corresponding divided blocks are determined: all light sources are traversed, and for each, if its illumination range intersects the camera bounding box, the divided blocks contained in the intersection are associated with the light source. In step S1204, for each divided block, if the number of associated light sources exceeds the preset value, light sources are selected for deletion. In step S1205, the texture information of the divided blocks is acquired. In step S1206, the illumination effect is calculated from the texture information. The steps shown in fig. 12 have been described in the above embodiments and are therefore not repeated here.
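For orientation only, a toy driver that strings the sketches above together under the same assumptions (all values are illustrative):

```cpp
#include <vector>

int main() {
    TileGrid grid{ -128.0f, -128.0f, 2.0f, 128 };            // 256 m x 256 m scene
    AABB2D view = viewBoundingBox(0.0f, 0.0f, grid.tileSize, grid.gridDim);
    std::vector<PointLight> lights = { {   3.0f,  4.0f, 10.0f },
                                       { -50.0f, 20.0f,  8.0f } };
    std::vector<std::vector<int>> tileLights;
    assignLightsToTiles(lights, grid, view, tileLights);      // steps S1202-S1203
    // Steps S1204-S1206 (capping, texture packing, shading) would follow,
    // as in insertCapped() and shadePixel() above.
    return 0;
}
```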
In the embodiment, the illumination effect of the light source in the visual field range can be determined by segmenting the scene, and the light source outside the visual field range is removed, so that the computing resources can be reduced; moreover, the problem of calculating the scene depth can be avoided, the cost of rendering depth is reduced, and the problem of flickering of delayed calculation is solved. In addition, the process of determining the texture information can be carried out on a CPU, an additional GPU channel is not needed, the requirement on hardware is low, the method can be applied to a mobile terminal, and the application range is wider.
Further, in this exemplary embodiment, a virtual light source processing apparatus is further provided, which is configured to execute the virtual light source processing method of the present disclosure. The device can be applied to a server or terminal equipment.
Referring to fig. 13, the virtual light source processing apparatus 1300 may include: a scene partitioning module 1310, a light source information determining module 1320, and an illumination determining module 1330, wherein:
a scene partitioning module 1310, configured to partition a virtual scene to obtain a plurality of partition blocks of the virtual scene.
A light source information determining module 1320, configured to acquire a plurality of light sources in a current view range, and determine texture information of each of the divided blocks according to light source information of each of the light sources.
The illumination determination module 1330 is configured to determine an illumination effect of each of the divided blocks by using the texture information of each of the divided blocks.
In an exemplary embodiment of the present disclosure, the light source information determining module 1320 may include an illumination range determining unit and a texture determining unit, wherein:
and the illumination range determining unit is used for acquiring the illumination range of each light source from each light source information.
And the texture determining unit is used for determining the light source corresponding to each divided block according to the illumination range from the plurality of light sources so as to determine the texture information of the divided blocks according to the light sources corresponding to the divided blocks.
In an exemplary embodiment of the present disclosure, the texture determining unit may include a view range determining unit, an intersection calculating unit, and a correspondence determining unit, wherein:
and the visual field range determining unit is used for determining the bounding box of the partition blocks contained in the current visual field range.
And the intersection calculation unit is used for calculating the intersection of the bounding box and the illumination range of the light source for each light source.
A correspondence determining unit configured to associate the divided block with the light source if the divided block is included in the intersection.
In an exemplary embodiment of the present disclosure, the apparatus further includes a light source number determining module and a light source rejecting module, wherein:
and the light source quantity determining module is used for determining whether the quantity of the corresponding light sources contained in the texture information of the divided blocks is a preset value.
And the light source eliminating module is used for selecting the light source to be discarded from the corresponding light sources to delete the light source to be discarded if the number of the corresponding light sources is a preset value.
In an exemplary embodiment of the present disclosure, the light source rejecting module may include a light source intensity obtaining unit, a contribution value determining unit, and a light source selecting unit, wherein:
and the light source intensity acquisition unit is used for acquiring the light source position and the intensity of each corresponding light source according to the light source information of each corresponding light source.
And the contribution value determining unit is used for calculating the contribution value of each corresponding light source by using the light source position and the intensity.
And the light source selection unit is used for selecting the light source to be discarded from the corresponding light source according to the contribution value.
In an exemplary embodiment of the disclosure, the scene partitioning module 1310 may be specifically configured to: and dividing the spatial data of the virtual scene in the horizontal direction to obtain the plurality of divided blocks.
In an exemplary embodiment of the present disclosure, the apparatus further includes a light source updating module for updating the plurality of light sources in the current field of view in response to a change in the user field of view.
In an exemplary embodiment of the present disclosure, the illumination determination module 1330 may include a light source indexing unit, a light source type determination unit, and an illumination attenuation calculation unit, wherein:
and the light source index unit is used for acquiring the target light source corresponding to each divided block from the texture information.
And the light source type determining unit is used for acquiring the light source type of the target light source from the light source information of the target light source.
And the illumination attenuation calculating unit is used for determining an illumination attenuation algorithm according to the light source type so as to calculate the illumination effect of the target light source on each partition block.
In an exemplary embodiment of the present disclosure, the illumination determination module 1330 may include a color calculation unit and a rendering unit, wherein:
and the color calculation unit is used for calculating the color value of each divided block by combining the illumination attenuation algorithm and the light source information.
And the rendering unit is used for rendering the partition blocks based on the color values so as to obtain the illumination effect of each partition block.
For details not disclosed in the embodiments of the virtual light source processing apparatus of the present disclosure, please refer to the embodiments of the virtual light source processing method of the present disclosure described above.
Referring to fig. 14, fig. 14 is a schematic diagram illustrating a system architecture of an exemplary application environment to which a virtual light source processing method and a virtual light source processing apparatus according to an embodiment of the present disclosure may be applied.
As shown in fig. 14, the system architecture 1400 may include one or more of end devices 1401, 1402, 1403, a network 1404, and a server 1405. The network 1404 serves to provide a medium for communication links between the terminal devices 1401, 1402, 1403 and the server 1405. The network 1404 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The terminal devices 1401, 1402, 1403 may be various electronic devices having a display screen, including but not limited to desktop computers, portable computers, smart phones, tablet computers, and the like. It should be understood that the number of terminal devices, networks, and servers in fig. 14 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 1405 may be a server cluster composed of a plurality of servers, or the like.
The virtual light source processing method provided by the embodiment of the present disclosure is generally executed by the server 1405, and accordingly, the virtual light source processing apparatus is generally disposed in the server 1405. However, it is easily understood by those skilled in the art that the virtual light source processing method provided in the present disclosure may also be executed by the terminal devices 1401, 1402, and 1403, and accordingly, the virtual light source processing apparatus may also be disposed in the terminal devices 1401, 1402, and 1403, which is not particularly limited in this exemplary embodiment.
For example, in an exemplary embodiment, the terminal devices 1401, 1402, and 1403 may divide a virtual scene, obtain a plurality of division blocks, determine a light source within a current view range of a user, determine texture information of the division blocks through the light source, render the virtual scene by using the texture information, and render a lighting effect in the virtual scene; therefore, the virtual scene is closer to the real scene, the user experience is more real, and the immersion of the user is improved.
FIG. 15 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present disclosure.
It should be noted that the computer system 1500 of the electronic device shown in fig. 15 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 15, the computer system 1500 includes a Central Processing Unit (CPU)1501 which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)1502 or a program loaded from a storage section 1508 into a Random Access Memory (RAM) 1503. In the RAM 1503, various programs and data necessary for system operation are also stored. The CPU 1501, the ROM 1502, and the RAM 1503 are connected to each other by a bus 1504. An input/output (I/O) interface 1505 is also connected to bus 1504.
The following components are connected to the I/O interface 1505: an input portion 1506 including a keyboard, a mouse, and the like; an output portion 1507 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 1508 including a hard disk and the like; and a communication section 1509 including a network interface card such as a LAN card, a modem, or the like. The communication section 1509 performs communication processing via a network such as the internet. A drive 1510 is also connected to the I/O interface 1505 as needed. A removable medium 1511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1510 as necessary, so that a computer program read out therefrom is mounted into the storage section 1508 as necessary.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the methods illustrated in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1509, and/or installed from the removable medium 1511. When executed by the Central Processing Unit (CPU) 1501, the computer program performs the various functions defined in the method and apparatus of the present application.
It should be noted that the computer-readable medium shown in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic or optical signals, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, fiber optic cable, RF, and the like, or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not, in any case, constitute a limitation on the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments. For example, the electronic device may implement the steps shown in fig. 1 and fig. 2, and so on.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of a single module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A virtual light source processing method is characterized by comprising the following steps:
dividing a virtual scene to obtain a plurality of divided blocks of the virtual scene;
acquiring a plurality of light sources in a current view range, and determining texture information of each divided block according to light source information of each of the light sources;
and determining the illumination effect of each divided block by using the texture information of each divided block.
2. The method of claim 1, wherein determining texture information for each of the divided blocks according to light source information for each of the light sources comprises:
acquiring an illumination range of each light source from the light source information of the light source;
and determining, from the plurality of light sources, a light source corresponding to each of the divided blocks according to the illumination range, so as to determine the texture information of each divided block according to the light source corresponding to the divided block.
3. The method of claim 2, wherein determining the light source corresponding to each of the divided blocks according to the illumination range comprises:
determining a bounding box of the divided blocks contained in the current view range;
for each light source, calculating the intersection of the bounding box and the illumination range of the light source;
and if a divided block is contained in the intersection, associating the divided block with the light source.
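In three dimensions, treating the illumination range of a light source as a sphere, the intersection test of claim 3 reduces to a standard axis-aligned bounding box versus sphere check. A sketch follows; the box and sphere shapes are assumptions, since the claims do not fix the geometry:

```python
def box_intersects_light(box_min, box_max, light_pos, light_radius):
    """True if the axis-aligned bounding box of a divided block
    overlaps the spherical illumination range of a light source."""
    dist_sq = 0.0
    for lo, hi, p in zip(box_min, box_max, light_pos):
        # Clamp the light position onto the box along each axis; the
        # clamped point is the box point nearest to the light.
        c = min(max(p, lo), hi)
        dist_sq += (c - p) ** 2
    return dist_sq <= light_radius ** 2

# Example: a light 2 units outside a box face, with a 3-unit range.
assert box_intersects_light((0, 0, 0), (8, 4, 8), (10.0, 2.0, 4.0), 3.0)
```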
4. The method of claim 3, wherein before associating the divided block with the light source, the method further comprises:
determining whether the number of corresponding light sources contained in the texture information of the divided block has reached a preset value;
and if the number of corresponding light sources has reached the preset value, selecting a light source to be discarded from the corresponding light sources so as to delete the light source to be discarded.
5. The method of claim 4, wherein selecting the light source to be discarded from the corresponding light sources comprises:
acquiring the light source position and intensity of each corresponding light source according to the light source information of each corresponding light source;
calculating the contribution value of each corresponding light source by using the position and the intensity of the light source;
and selecting a light source to be discarded from the corresponding light sources according to the contribution value.
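One plausible reading of the contribution value in claim 5 is the brightness a light source delivers at the block, i.e. intensity attenuated by distance. The sketch below ranks lights that way and discards the weakest when a block's budget is full; the exact formula and the block_center parameter are assumptions, and the hypothetical Light record from the earlier sketch is reused:

```python
def contribution(light, block_center):
    """Estimated contribution of a light source to a divided block:
    intensity attenuated by the squared distance to the block center."""
    dx, dy, dz = (light.position[i] - block_center[i] for i in range(3))
    return light.intensity / (1.0 + dx * dx + dy * dy + dz * dz)

def trim_lights(lights, block_center, max_lights):
    """When a block already holds max_lights sources (the preset value
    of claim 4), keep the strongest contributors and discard the rest."""
    ranked = sorted(lights, key=lambda l: contribution(l, block_center),
                    reverse=True)
    return ranked[:max_lights]
```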
6. The method of claim 1, wherein the dividing the virtual scene to obtain a plurality of divided blocks of the virtual scene comprises:
and dividing the spatial data of the virtual scene in the horizontal direction to obtain the plurality of divided blocks.
7. The method of claim 1, further comprising:
updating the plurality of light sources within the current view range in response to a change in a view range of a user.
8. The method according to claim 1, wherein the determining the lighting effect of each of the divided blocks by using the texture information of each of the divided blocks comprises:
acquiring a target light source corresponding to each divided block from the texture information;
acquiring the light source type of the target light source from the light source information of the target light source;
and determining an illumination attenuation algorithm according to the light source type, so as to calculate the illumination effect of the target light source on each divided block.
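The claims name no specific attenuation formulas, so the sketch below shows one common pairing as an assumption: a smooth distance falloff for point lights, and the same falloff gated by a cone test for spot lights, selected by light source type as in claim 8:

```python
def point_attenuation(distance, radius):
    """Smooth falloff that reaches exactly zero at the illumination range."""
    if distance >= radius:
        return 0.0
    t = distance / radius
    return (1.0 - t * t) ** 2

def spot_attenuation(distance, radius, to_surface, spot_dir, cos_cutoff):
    """Point falloff multiplied by a hard cone test; to_surface points
    from the light to the shaded point, spot_dir is the light's axis,
    and both are unit vectors."""
    cos_angle = sum(a * b for a, b in zip(to_surface, spot_dir))
    if cos_angle < cos_cutoff:
        return 0.0
    return point_attenuation(distance, radius)

# Selecting the algorithm from the light source type (claim 8):
ATTENUATION_BY_TYPE = {"point": point_attenuation, "spot": spot_attenuation}
```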
9. The method according to claim 8, wherein the determining the lighting effect of each of the divided blocks by using the texture information of each of the divided blocks comprises:
calculating color values of the divided blocks by combining the illumination attenuation algorithm and the light source information;
and rendering the divided blocks based on the color values to obtain the illumination effect of each divided block.
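Putting the pieces together, a color value for a shaded point can be accumulated from the lights assigned to its block and then written out during rendering, as claim 9 describes. A sketch reusing the hypothetical Light record and point_attenuation above:

```python
def shade(surface_color, block_lights, point):
    """Accumulate the color contribution of every light source assigned
    to the divided block containing `point` (claims 8 and 9)."""
    out = [0.0, 0.0, 0.0]
    for light in block_lights:
        dx = light.position[0] - point[0]
        dy = light.position[1] - point[1]
        dz = light.position[2] - point[2]
        dist = (dx * dx + dy * dy + dz * dz) ** 0.5
        att = point_attenuation(dist, light.radius)
        for i in range(3):
            out[i] += surface_color[i] * light.color[i] * light.intensity * att
    return tuple(out)
```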
10. A virtual light source processing apparatus, comprising:
the scene dividing module is used for dividing a virtual scene to obtain a plurality of divided blocks of the virtual scene;
the light source information determining module is used for acquiring a plurality of light sources in the current view range and determining texture information of each divided block according to light source information of each light source;
and the illumination determining module is used for determining the illumination effect of each divided block by using the texture information of each divided block.
11. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1-9.
12. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-9 via execution of the executable instructions.
CN202010055325.5A 2020-01-17 2020-01-17 Virtual light source processing method, device, medium and electronic equipment Active CN111260766B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010055325.5A CN111260766B (en) 2020-01-17 2020-01-17 Virtual light source processing method, device, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111260766A true CN111260766A (en) 2020-06-09
CN111260766B CN111260766B (en) 2024-03-15

Family

ID=70948930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010055325.5A Active CN111260766B (en) 2020-01-17 2020-01-17 Virtual light source processing method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111260766B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6078332A (en) * 1997-01-28 2000-06-20 Silicon Graphics, Inc. Real-time lighting method using 3D texture mapping
CN105335996A (en) * 2014-06-30 2016-02-17 北京畅游天下网络技术有限公司 Light irradiation effect calculation method and device
CN108236783A (en) * 2018-01-09 2018-07-03 网易(杭州)网络有限公司 The method, apparatus of illumination simulation, terminal device and storage medium in scene of game
CN110310356A (en) * 2019-06-26 2019-10-08 北京奇艺世纪科技有限公司 A kind of scene rendering method and apparatus

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111760277B (en) * 2020-07-06 2024-05-28 网易(杭州)网络有限公司 Illumination rendering method and device
CN111760277A (en) * 2020-07-06 2020-10-13 网易(杭州)网络有限公司 Illumination rendering method and device
US11663773B2 (en) 2020-08-21 2023-05-30 Nvidia Corporation Using importance resampling to reduce the memory incoherence of light sampling
US20220058851A1 (en) * 2020-08-21 2022-02-24 Nvidia Corporation Grid-based light sampling for ray tracing applications
CN111915712A (en) * 2020-08-28 2020-11-10 网易(杭州)网络有限公司 Illumination rendering method and device, computer readable medium and electronic equipment
CN111915712B (en) * 2020-08-28 2024-05-28 网易(杭州)网络有限公司 Illumination rendering method and device, computer readable medium and electronic equipment
CN112562051A (en) * 2020-11-30 2021-03-26 腾讯科技(深圳)有限公司 Virtual object display method, device, equipment and storage medium
CN112562051B (en) * 2020-11-30 2023-06-27 腾讯科技(深圳)有限公司 Virtual object display method, device, equipment and storage medium
CN112882677A (en) * 2021-02-08 2021-06-01 洲磊新能源(深圳)有限公司 Technical method for processing RGB LED multi-color light source
CN112819938A (en) * 2021-02-09 2021-05-18 腾讯科技(深圳)有限公司 Information processing method and device and computer readable storage medium
CN113052950B (en) * 2021-03-31 2021-12-17 完美世界(北京)软件科技发展有限公司 Illumination calculation method and device, computer equipment and computer readable storage medium
CN113052950A (en) * 2021-03-31 2021-06-29 完美世界(北京)软件科技发展有限公司 Illumination calculation method and device, computer equipment and computer readable storage medium
CN114399425A (en) * 2021-12-23 2022-04-26 北京字跳网络技术有限公司 Image processing method, video processing method, device, equipment and medium
CN114119853A (en) * 2022-01-26 2022-03-01 腾讯科技(深圳)有限公司 Image rendering method, device, equipment and medium
WO2023142607A1 (en) * 2022-01-26 2023-08-03 腾讯科技(深圳)有限公司 Image rendering method and apparatus, and device and medium
WO2024037176A1 (en) * 2022-08-18 2024-02-22 腾讯科技(深圳)有限公司 Method and apparatus for rendering virtual scenario, and device and medium
WO2024109006A1 (en) * 2022-11-23 2024-05-30 华为云计算技术有限公司 Light source elimination method and rendering engine

Also Published As

Publication number Publication date
CN111260766B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN111260766B (en) Virtual light source processing method, device, medium and electronic equipment
JP6504212B2 (en) Device, method and system
US10347042B2 (en) Importance sampling of sparse voxel octrees
KR101639852B1 (en) Pixel value compaction for graphics processing
CN109364481B (en) Method, device, medium and electronic equipment for real-time global illumination in game
KR20220043157A (en) Pixel point identification method and apparatus, lighting rendering method and apparatus, electronic device and storage medium
CN112288873B (en) Rendering method and device, computer readable storage medium and electronic equipment
CN112116692A (en) Model rendering method, device and equipment
WO2021249091A1 (en) Image processing method and apparatus, computer storage medium, and electronic device
CN111210497B (en) Model rendering method and device, computer readable medium and electronic equipment
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
KR20130135309A (en) Data storage address assignment for graphics processing
CN111915712B (en) Illumination rendering method and device, computer readable medium and electronic equipment
CN115509764B (en) Real-time rendering multi-GPU parallel scheduling method and device and memory
CN110930492B (en) Model rendering method, device, computer readable medium and electronic equipment
US20130346041A1 (en) Method for estimating the quantity of light received by a participating media, and corresponding device
CN111870953A (en) Height map generation method, device, equipment and storage medium
CN109377552B (en) Image occlusion calculating method, device, calculating equipment and storage medium
KR102176511B1 (en) Apparatus and Method for processing image
CN111569418B (en) Rendering method, device and medium for content to be output and electronic equipment
CN116012520B (en) Shadow rendering method, shadow rendering device, computer equipment and storage medium
CN111862342A (en) Texture processing method and device for augmented reality, electronic equipment and storage medium
CN108280887B (en) Shadow map determination method and device
CN113936097B (en) Volume cloud rendering method, device and storage medium
CN115588064A (en) Video generation method and device, electronic equipment and storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant