WO2020233396A1 - Lighting rendering method and apparatus, storage medium, and electronic apparatus - Google Patents

Lighting rendering method and apparatus, storage medium, and electronic apparatus

Info

Publication number
WO2020233396A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture
light source
source point
point set
virtual light
Prior art date
Application number
PCT/CN2020/088629
Other languages
English (en)
French (fr)
Inventor
魏知晓
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to EP20810793.8A priority Critical patent/EP3971839A4/en
Priority to JP2021536692A priority patent/JP7254405B2/ja
Publication of WO2020233396A1 publication Critical patent/WO2020233396A1/zh
Priority to US17/365,498 priority patent/US11600040B2/en
Priority to US18/164,385 priority patent/US11915364B2/en
Priority to US18/427,479 priority patent/US20240203045A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G06T15/04 Texture mapping
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation

Definitions

  • This application relates to the computer field, specifically, to a lighting rendering technology.
  • The embodiments of the present application provide a lighting rendering method and device, a storage medium, and an electronic device, to at least solve the technical problem in the related art of low global illumination rendering efficiency for virtual objects in a virtual three-dimensional scene.
  • A lighting rendering method is provided, including: obtaining a first picture of a virtual three-dimensional scene under a target perspective, where the first picture includes the virtual object on which lighting rendering is to be performed in the virtual three-dimensional scene under the target perspective; determining a target virtual light source point set for performing lighting rendering on the virtual object in the first picture; and performing lighting rendering on the virtual object in the first picture according to the target virtual light source point set.
  • An illumination rendering device is provided, including: an acquiring unit, configured to acquire a first picture of a virtual three-dimensional scene under a target perspective, where the first picture includes the virtual object to be illuminated in the virtual three-dimensional scene; a determining unit, configured to determine a target virtual light source point set for illuminating the virtual object in the first picture; and a rendering unit, configured to perform lighting rendering on the virtual object in the first picture according to the target virtual light source point set.
  • The determining unit includes: a first determining module, configured to determine the original virtual light source point set of each sub-picture in the first picture, where the first picture includes multiple sub-pictures; and a first merging module, configured to merge the original virtual light source point sets of the sub-pictures to obtain the target virtual light source point set, where the target virtual light source point set does not include repeated light source points.
  • The first determining module includes: a first determining sub-module, configured to determine, for each sub-picture in the first picture, the associated pixel of each pixel in the sub-picture to obtain the associated pixel point set of the sub-picture;
  • and a second determining sub-module, configured to determine the first M pixels with the highest occurrence frequency in the associated pixel point set as the original virtual light source point set of the sub-picture, where M is an integer greater than zero.
  • The first determining sub-module is further configured to determine each pixel in the sub-picture as a first pixel and perform the following operations: in each of the four directions up, down, left, and right, determine the pixel that is closest to the first pixel and whose depth value is greater than the depth value of the first pixel as a second pixel; determine the second pixel with the smallest depth value as the associated pixel of the first pixel; and combine the associated pixels of all pixels in the sub-picture into the associated pixel set of the sub-picture, where the associated pixel set may contain repeated pixels.
  • The determining unit includes: a second determining module, configured to determine the original virtual light source point set of each sub-picture in the first picture if the time difference between the time when the first picture is acquired and the time when the (J-1)-th processed picture is acquired is greater than or equal to a first threshold, where the first picture includes multiple sub-pictures, the first picture is the J-th picture in the picture set, and J is an integer greater than 1;
  • and a second merging module, configured to merge the original virtual light source point sets of the sub-pictures to obtain the target virtual light source point set of the first picture, where the target virtual light source point set does not include repeated light source points.
  • The determining unit further includes: a third determining module, configured to use the target virtual light source point set of the (J-1)-th processed picture as the target virtual light source point set of the first picture if the time difference between the time when the first picture is acquired and the time when the (J-1)-th processed picture is acquired is less than the first threshold, where the first picture is the J-th picture in the picture set and J is an integer greater than 1.
  • The rendering unit includes: a first obtaining module, configured to obtain the illumination result of the target virtual light source point set on each pixel in the first picture to obtain the light map of the first picture, where the illumination value of each pixel in the first picture is recorded in the light map; and an overlay module, configured to superimpose the light map and the color map of the first picture to obtain the rendered first picture.
  • The obtaining module includes an execution sub-module, configured to take each pixel in the first picture as a third pixel and perform the following operations until the illumination result of each pixel in the first picture is determined: determine the first illumination value contributed by each virtual light source point in the target virtual light source point set to the third pixel; and accumulate the first illumination values of all virtual light source points in the target virtual light source point set on the third pixel to obtain the illumination result.
  • The rendering unit is configured to: obtain the target virtual light source point set of each processed picture in the picture set before the first picture; determine the second virtual light source point set of the first picture according to the target virtual light source point sets of the pictures in the picture set, where the first picture is the last picture in the picture set; use N pixels of the first virtual light source point set of the processed picture in the frame before the first picture to replace N pixels in the second virtual light source point set of the first picture, and determine the replaced second virtual light source point set as the first virtual light source point set of the first picture; and use the first virtual light source point set of the first picture to perform lighting rendering on the virtual object in the first picture.
  • The processing unit further includes: a second obtaining module, configured to obtain the weight value set for each processed picture in the picture set; a third obtaining module, configured to obtain, for each pixel in the first picture, the weight of every processed picture whose target virtual light source point set includes that pixel; a fourth obtaining module, configured to sum these weights to obtain the weight sum of each pixel in the first picture; and a fourth determining module, configured to use the K pixels with the largest weight sums in the first picture as the second virtual light source point set of the first picture, where K is an integer greater than zero.
  • A storage medium is provided, in which a computer program is stored, where the computer program is configured to execute the above illumination rendering method when run.
  • An electronic device is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor executes the above lighting rendering method through the computer program.
  • In the embodiments of the present application, the first picture of the virtual three-dimensional scene under the target perspective is acquired, where the first picture includes the virtual object to be illuminated in the virtual three-dimensional scene under the target perspective; the target virtual light source point set for performing lighting rendering on the virtual object in the first picture is determined, these points being the main light source points for rendering the virtual object in the virtual three-dimensional scene; and lighting rendering is performed on the virtual object in the first picture according to the target virtual light source point set.
  • Fig. 1 is a schematic diagram of an application environment of an optional lighting rendering method according to an embodiment of the present application.
  • Fig. 2 is a schematic flowchart of an optional lighting rendering method according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram of an optional lighting rendering method according to an embodiment of the present application.
  • Fig. 4 is a schematic diagram of another optional lighting rendering method according to an embodiment of the present application.
  • Fig. 5 is a schematic diagram of yet another optional lighting rendering method according to an embodiment of the present application.
  • Fig. 6 is a schematic diagram of yet another optional lighting rendering method according to an embodiment of the present application.
  • Fig. 7 is a schematic flowchart of another optional lighting rendering method according to an embodiment of the present application.
  • Fig. 8 is a schematic diagram of yet another optional lighting rendering method according to an embodiment of the present application.
  • Fig. 9 is a schematic structural diagram of an optional lighting rendering device according to an embodiment of the present application.
  • Fig. 10 is a schematic structural diagram of an optional electronic device according to an embodiment of the present application.
  • a lighting rendering method is provided.
  • The above lighting rendering method can be applied, but is not limited, to the environment shown in Fig. 1.
  • the user 102 and the user equipment 104 may perform human-computer interaction.
  • the user equipment 104 includes a memory 106 for storing interactive data, and a processor 108 for processing interactive data.
  • the user equipment 104 can exchange data with the server 112 through the network 110.
  • the server 112 includes a database 114 for storing interactive data, and a processing engine 116 for processing interactive data.
  • the user equipment 104 runs a virtual three-dimensional scene.
  • the user equipment 104 obtains the first picture under the target perspective through steps S102 to S106, and determines the target virtual light source point set of the first picture, and uses the target virtual light source point set to render the virtual objects in the first picture.
  • In this way, lighting rendering is performed on the first picture alone, without global illumination rendering of the virtual objects in the whole virtual three-dimensional scene, which improves the efficiency of lighting rendering of the virtual three-dimensional scene.
  • The above illumination rendering method can be applied, but is not limited, to terminals capable of computation, such as mobile phones, tablets, laptops, and PCs.
  • the above-mentioned networks can include, but are not limited to, wireless networks or wired networks.
  • the wireless network includes: wireless local area network (Wireless Fidelity, WIFI) and other networks that implement wireless communication.
  • the aforementioned wired network may include, but is not limited to: wide area network, metropolitan area network, and local area network.
  • the aforementioned server may include, but is not limited to, any hardware device that can perform calculations.
  • the foregoing illumination rendering method includes:
  • S202: Acquire a first picture of the virtual three-dimensional scene under the target perspective, where the first picture includes the virtual object on which lighting rendering is to be performed in the virtual three-dimensional scene under the target perspective;
  • S204: Determine a target virtual light source point set for performing lighting rendering on the virtual object in the first picture;
  • S206: Perform lighting rendering on the virtual object in the first picture according to the target virtual light source point set.
  • The above illumination rendering method can be applied, but is not limited, to the process of performing illumination rendering on a three-dimensional virtual scene.
  • For example: performing lighting rendering on the three-dimensional virtual scene of a game, of virtual training, or of virtual shopping.
  • the virtual three-dimensional scene includes virtual objects to be rendered by global illumination.
  • The related-art method requires a large amount of calculation, and its global illumination rendering efficiency for the virtual three-dimensional scene is low.
  • In this embodiment, the first picture of the virtual three-dimensional scene under the target perspective is acquired, the target virtual light source point set for lighting the virtual objects in the first picture is determined, and the target virtual light source point set is used to perform lighting rendering on the virtual object in the first picture. Thus only the first picture undergoes lighting rendering, and real-time rendering of the whole virtual 3D scene is not required, which reduces the amount of calculation in the rendering process and improves the efficiency of rendering the virtual 3D scene.
  • When determining the target virtual light source point set, the first picture can be split into multiple sub-pictures, the original virtual light source point set of each sub-picture is determined, and the original virtual light source point sets are merged to obtain a target virtual light source point set that does not contain repeated light source points.
  • Fig. 3 is a schematic diagram of an optional splitting of the first picture. As shown in FIG. 3, the first picture 302 is split into multiple sub-pictures 304, where each white rectangular box in FIG. 3 represents a sub-picture.
  • The method of determining the associated pixel point set of the sub-picture may be as follows: determine each pixel in the sub-picture as a first pixel; for each first pixel, in each of the four directions up, down, left, and right, determine the pixel that is closest to the first pixel and whose depth value is greater than the depth value of the first pixel as a second pixel. The second pixel with the smallest depth value is then determined as the associated pixel of the first pixel, and the associated pixels of all pixels in the sub-picture are merged to obtain the associated pixel set of the sub-picture, where the associated pixel set may contain repeated pixels.
  • each pixel 404 in the sub-picture corresponds to four directions of positive x, positive y, negative x, and negative y.
  • The positive x direction defaults to right and the positive y direction defaults to up, so each pixel corresponds to the four directions up, down, left, and right.
  • Determine each pixel in the sub-picture as the first pixel, then traverse the pixels closest to it in the four directions, such as the adjacent pixels, and determine the depth value of each pixel adjacent to the first pixel.
  • Fig. 5 is a schematic diagram of traversing from the first pixel 502 to the second pixels 504 in four directions. A traversal limit needs to be set: in each of the four directions, after a specified number of pixels has been traversed, the traversal stops even if no second pixel has been found. If no second pixel is traversed in a direction, the first pixel itself is taken as the second pixel for that direction. After the second pixels are obtained, the second pixel with the smallest depth value is determined as the associated pixel of the first pixel.
  • the associated pixels of each pixel are combined into an associated pixel set, and the associated pixel set contains repeated associated pixels.
  • the first M pixels with the highest occurrence frequency in the associated pixel point set are determined as the original virtual light source point set of the sub-picture.
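The associated-pixel search and frequency selection described above can be sketched as follows. This is a non-normative illustration, not the patent's implementation: the depth map is a plain nested list, `max_steps` stands in for the preset traversal limit, and the fallback to the first pixel itself in a direction with no qualifying pixel follows the description above.

```python
from collections import Counter

def associated_pixel(depth, x, y, max_steps=8):
    """Find the associated pixel of (x, y): in each of the four
    directions, take the nearest pixel whose depth value exceeds
    depth[y][x] (the "second pixel"); among the four second pixels,
    return the one with the smallest depth value."""
    h, w = len(depth), len(depth[0])
    seconds = []
    for dx, dy in ((0, -1), (0, 1), (-1, 0), (1, 0)):  # up, down, left, right
        px, py = x, y
        found = (x, y)  # fallback: the first pixel itself
        for _ in range(max_steps):
            px, py = px + dx, py + dy
            if not (0 <= px < w and 0 <= py < h):
                break
            if depth[py][px] > depth[y][x]:
                found = (px, py)
                break
        seconds.append(found)
    return min(seconds, key=lambda p: depth[p[1]][p[0]])

def original_light_sources(depth, pixels, m):
    """Top-M most frequent associated pixels of a sub-picture."""
    counts = Counter(associated_pixel(depth, x, y) for x, y in pixels)
    return [p for p, _ in counts.most_common(m)]
```

A usage sketch: calling `original_light_sources` once per sub-picture and merging the results yields the (deduplicated) target virtual light source point set.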
  • use the target virtual light source point set of the first picture to perform lighting rendering on the virtual object in the first picture.
  • The target virtual light source point set of the first picture is recalculated only once every predetermined period. If the first picture is the J-th picture in the picture set, with J an integer greater than 1, the time at which the previous processed picture (for example, the (J-1)-th picture) was acquired and the time at which the first picture was acquired are compared. If the difference between the two times is less than the first threshold, the target virtual light source point set of the previous processed picture is used as the target virtual light source point set of the first picture, reducing the system's amount of calculation.
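The reuse-or-recompute decision above can be sketched as a small cache. This is an illustrative sketch only; the class name and the `compute` callback are hypothetical stand-ins for the per-picture light-source computation.

```python
class LightSourceCache:
    """Recompute the target virtual light source set only when at
    least `threshold` seconds have elapsed since the last computation;
    otherwise reuse the previous picture's set."""

    def __init__(self, threshold, compute):
        self.threshold = threshold  # the "first threshold", in seconds
        self.compute = compute      # expensive per-picture computation
        self.last_time = None
        self.last_set = None

    def target_set(self, picture, now):
        if self.last_time is None or now - self.last_time >= self.threshold:
            self.last_set = self.compute(picture)
            self.last_time = now
        return self.last_set
```

With a 0.04 s frame interval and a 0.2 s threshold, only every fifth frame triggers a recomputation, matching the 10-picture example later in the document.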
  • the target virtual light source point set of the first picture may be used to perform lighting rendering on the virtual object in the first picture.
  • Each pixel in the first picture is taken as a third pixel, and the following operations are performed until the illumination result of each pixel in the first picture is determined: calculate the first illumination value contributed by each virtual light source point in the target virtual light source point set to the third pixel; superimpose these first illumination values to obtain their sum; and use the calculated sum as the illumination result of the target virtual light source point set on that pixel.
  • the target virtual light source point in the target virtual light source point set is used to perform illumination rendering on the first picture.
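The per-pixel accumulation above reduces to a sum over light source points. The sketch below is illustrative: `first_light_value` is a hypothetical stand-in for any single-light illumination model (such as the model M1 given later in the document), and the light map is represented as a plain dictionary.

```python
def pixel_illumination(light_points, pixel, first_light_value):
    """Illumination result of one pixel (the "third pixel"): the sum
    of the first illumination values of every virtual light source
    point in the target set."""
    return sum(first_light_value(lp, pixel) for lp in light_points)

def light_map(light_points, pixels, first_light_value):
    """Light map of the picture: the illumination result recorded for
    every pixel."""
    return {p: pixel_illumination(light_points, p, first_light_value)
            for p in pixels}
```

The resulting light map is then superimposed with the color map to obtain the rendered picture.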
  • After the target virtual light source point set of the first picture is acquired, it can be processed to obtain the first virtual light source point set, and the first virtual light source point set can be used to render the first picture.
  • The processing procedure is as follows: after the target virtual light source point set of the first picture is obtained, the target virtual light source point sets of all processed pictures before the first picture may be obtained.
  • The first picture and multiple rendered pictures before it are determined as a picture set.
  • The specific number of pictures in the picture set can be configured.
  • The first picture is the last picture in the picture set.
  • For each picture in the picture set, the target virtual light source point set is determined by the above method, and a weight value is set for each picture. Then, for each pixel in the target virtual light source point set of the first picture, the target virtual light source point set of every picture in the picture set is traversed; whenever a set contains the pixel, the weight of that picture is collected. After all target virtual light source point sets in the picture set have been traversed, the collected weights are accumulated to obtain the weight of that pixel in the target virtual light source point set of the first picture.
  • the weight of each pixel in the first picture can be obtained by the above method.
  • The first K pixels with the largest weights are determined as the second virtual light source point set of the first picture, and the resulting point set serves as the first virtual light source point set of the first picture.
  • The first picture rendered by this method is more stable than one rendered directly with the target virtual light source points, and flickering of the picture over time is avoided.
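The temporal weighting and carry-over described above can be sketched as follows. This is one plausible reading of the description, not a definitive implementation: the replacement rule for the N carried-over points is stated only loosely in the text, so the `first_light_source_set` helper below is a labeled assumption.

```python
from collections import defaultdict

def second_light_source_set(history, weights, k):
    """history: one target light source point set per processed
    picture in the picture set (last entry = first picture);
    weights: one weight per picture.  Each pixel accumulates the
    weight of every picture whose set contains it; the K pixels with
    the largest weight sums are kept (ties broken by coordinate)."""
    score = defaultdict(float)
    for point_set, w in zip(history, weights):
        for p in point_set:
            score[p] += w
    ranked = sorted(score, key=lambda p: (-score[p], p))
    return ranked[:k]

def first_light_source_set(second_set, prev_first_set, n):
    """ASSUMED replacement rule: carry over N points from the previous
    picture's first set, dropping duplicates, keeping the set size."""
    carried = list(prev_first_set)[:n]
    kept = [p for p in second_set if p not in carried]
    return (carried + kept)[:len(second_set)]
```

Reusing part of the previous frame's set is what damps frame-to-frame flicker: the light source positions change gradually rather than all at once.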
  • Figure 6 is an optional game display interface.
  • the game runs with a virtual three-dimensional scene.
  • This solution does not need to perform real-time lighting rendering of the virtual objects of the running game in the virtual three-dimensional scene.
  • The pictures in the virtual three-dimensional scene are rendered before they are displayed on the client.
  • The picture to be rendered is called the original picture, that is, the first picture. The description is given with reference to steps S702 to S722 in Fig. 7.
  • abs(x) represents the absolute value of x.
  • p2w(x, y, z) indicates that the point with coordinates (x, y) and depth z on a two-dimensional (2D) image corresponds to a three-dimensional (3D) position in the scene world.
  • normalize(x) means finding the normalized vector of vector x.
  • cross(x, y) means finding the cross-product vector of vectors x and y.
  • Indirect(px, py, x, y) represents the indirect illumination from the scene position corresponding to 2D image coordinates (px, py) onto the scene position corresponding to 2D image coordinates (x, y).
  • dot(x, y) means performing the dot-product calculation on vectors x and y to obtain the dot-product result.
  • the color map of the original picture records the color value of each pixel of the original picture
  • the depth map of the original picture records the depth value of each pixel of the original picture.
  • the depth map and color map can be automatically obtained when the system exports the original image, so there is no specific explanation here.
  • the normal map records the normal value of each pixel of the original image.
  • the specific steps for determining the normal map according to the depth map are:
  • D(x, y) is known data.
  • N(x, y) = normalize(cross(Right, Top)).
  • c, L, r, u, d, minLr, minud, Mid, Right, Top are variables defined to explain the calculation process.
  • the normal map of the original picture is obtained by the above method.
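The formula N(x, y) = normalize(cross(Right, Top)) can be sketched with finite differences of reconstructed world positions. This is an illustrative sketch under stated assumptions: the default `p2w` here is a trivial orthographic mapping (world position = (x, y, depth)), which is not the patent's projection, and the sign of the resulting normal depends on that mapping and on the handedness convention.

```python
import math

def normal_from_depth(depth, x, y, p2w=lambda x, y, z: (x, y, z)):
    """Normal at pixel (x, y) as normalize(cross(Right, Top)), where
    Right and Top are world-space difference vectors to the right and
    top neighbours.  `p2w` maps image coordinates plus depth to a
    world position; an orthographic mapping is assumed by default."""
    c = p2w(x, y, depth[y][x])
    r = p2w(x + 1, y, depth[y][x + 1])   # right neighbour
    t = p2w(x, y - 1, depth[y - 1][x])   # top neighbour (image y grows downward)
    right = tuple(a - b for a, b in zip(r, c))
    top = tuple(a - b for a, b in zip(t, c))
    # cross product right x top
    n = (right[1] * top[2] - right[2] * top[1],
         right[2] * top[0] - right[0] * top[2],
         right[0] * top[1] - right[1] * top[0])
    length = math.sqrt(sum(v * v for v in n)) or 1.0
    return tuple(v / length for v in n)
```

For a flat depth map the sketch yields a constant normal perpendicular to the image plane, as expected.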
  • the normal map and color map of the original picture are available.
  • If the depth value depth(px, py) of pixel (px, py) is less than the depth value depth(x, y) of pixel (x, y), the pixel (px, py) is considered closer to the observation position than (x, y); that is, (px, py) is a point that can produce indirect light on (x, y) at the current observation position (target perspective). In that case, set (px, py) as the second pixel of pixel (x, y) in that direction and terminate the traversal in that direction; otherwise, continue traversing to the next pixel in the given direction, with the number of traversed pixels not exceeding a preset number.
  • each sub-picture contains (W/Gw, H/Gh) pixels.
  • W and H are the width and height of the original picture;
  • Gw and Gh are integers greater than 1.
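The grid split above can be sketched directly. This is an illustrative helper (the function name is hypothetical), assuming for simplicity that W and H are divisible by Gw and Gh.

```python
def split_into_subpictures(w, h, gw, gh):
    """Partition a w x h picture into a gw x gh grid of sub-pictures,
    each covering (w // gw) x (h // gh) pixels.  Returns each
    sub-picture as a list of its (x, y) pixel coordinates."""
    sw, sh = w // gw, h // gh
    return [[(x, y)
             for y in range(j * sh, (j + 1) * sh)
             for x in range(i * sw, (i + 1) * sw)]
            for j in range(gh) for i in range(gw)]
```

Each sub-picture is then processed independently to produce its own original virtual light source point set before the merge step.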
  • A second threshold t needs to be introduced, so that two positions (u1, v1) and (u2, v2) within distance t of each other in 2D are considered the same position (u1, v1).
  • The circle with (u1, v1) as the origin and t as the radius is taken as the associated area of (u1, v1).
  • Within this area, the statistical count of (u1, v1) is incremented by 1 instead of counting (u2, v2) as a new point.
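The threshold-based counting above can be sketched as follows; this is an illustrative greedy version (the function name is hypothetical) in which each point is attributed to the first previously seen position within distance t.

```python
def merge_with_threshold(points, t):
    """Frequency counting where any point within distance t of an
    already-seen position is counted toward that position (its
    "associated area") instead of starting a new entry."""
    counts = []  # list of [(u, v), count], in first-seen order
    for (u2, v2) in points:
        for entry in counts:
            (u1, v1) = entry[0]
            if (u1 - u2) ** 2 + (v1 - v2) ** 2 <= t * t:
                entry[1] += 1
                break
        else:
            counts.append([(u2, v2), 1])
    return {pos: n for pos, n in counts}
```

This keeps near-coincident candidate light source points from fragmenting the frequency statistics across many almost-identical positions.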
  • the target virtual light source point set can be used to render the original picture.
  • this embodiment processes the target virtual light source point set to obtain the first virtual light source point set, and uses the first virtual light source point set to render the original picture.
  • the process of determining the first virtual light source point set is as follows:
  • Add Sf to the database Data, and then calculate Sf' based on the historical information in the database and Sf.
  • Sf' is considered to produce a smoother indirect illumination value in the time domain than Sf.
  • h is an integer greater than 1.
  • After the first virtual light source point set of the original picture is obtained, it is used to perform illumination rendering on the original picture.
  • the first virtual light source point set Sf' is used to indirectly illuminate the current image.
  • From the existing image information (the color map, depth map, and normal map), find the corresponding color Lc, the position Lw in the virtual three-dimensional scene, and the normal Ln.
  • The lighting model M1 uses the known information to perform the indirect lighting calculation and obtain the lighting result pi.
  • a lighting model formula that can be implemented is as follows:
  • pi = Lc * dot(Ln, normalize(pw - Lw)) * dot(pn, normalize(Lw - pw)) / (c * length(pw - Lw)).
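One evaluation of this lighting model can be sketched as follows. This is an illustrative sketch, not the patent's shader: Lc is treated as a scalar intensity rather than an RGB color, and the vector helpers mirror the glossary functions defined earlier in the document.

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def length(a):
    return math.sqrt(dot(a, a))

def normalize(a):
    l = length(a) or 1.0
    return tuple(x / l for x in a)

def indirect_light(Lc, Ln, lw, pn, pw, c=1.0):
    """pi = Lc * dot(Ln, normalize(pw - lw)) * dot(pn, normalize(lw - pw))
            / (c * length(pw - lw)):
    light source intensity Lc, light normal Ln, light position lw;
    shaded-point normal pn and position pw; attenuation coefficient c."""
    to_point = normalize(sub(pw, lw))
    to_light = normalize(sub(lw, pw))
    return Lc * dot(Ln, to_point) * dot(pn, to_light) / (c * length(sub(pw, lw)))
```

Note how lowering the attenuation coefficient c increases the result, which is how the later "move the light point and lower c" area-light approximation brightens the contribution.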
  • the rendered comprehensive image is displayed on the client terminal to realize the rendering of the three-dimensional virtual scene of the game.
  • The target virtual light source point may also be moved a certain distance along the extension line from the pixel point to the target virtual light source point, and the attenuation coefficient c of the point light source's irradiance lowered. In this way, the point light source can be used to simulate the lighting effect of an area light source and improve the lighting quality.
  • Determining the target virtual light source point set for lighting rendering of the first picture includes:
  • S1: Determine the original virtual light source point set of each sub-picture in the first picture, where the first picture includes multiple sub-pictures;
  • S2: Merge the original virtual light source point sets of the sub-pictures to obtain the target virtual light source point set, where the target virtual light source point set does not include repeated light source points.
  • the first picture is split into multiple sub-pictures, and the original virtual light source point set of each sub-picture is calculated.
  • the original picture 302 is split into multiple sub-pictures 304, and the original virtual light source point set of each sub-picture 304 is calculated, and the original virtual light source point set of each sub-picture is combined as the target virtual light source point. set.
  • the first picture is split into multiple sub-pictures, and then the original virtual light source point set of each sub-picture is calculated and merged into the target virtual light source point set, so that the target virtual light source point of the target virtual light source point set is It is relatively scattered and will not be concentrated in a small area of the original picture, thereby improving the accuracy of determining the target virtual light source point.
  • Determining the original virtual light source point set of each sub-picture in the first picture includes:
  • S1: For each sub-picture, determine the associated pixel of each pixel in the sub-picture to obtain the associated pixel point set of the sub-picture;
  • S2: Determine the first M pixels with the highest occurrence frequency in the associated pixel point set as the original virtual light source point set of the sub-picture, where M is an integer greater than zero.
  • When determining the associated pixel of each pixel: if the depth value of the current pixel is greater than a limit value, the associated pixel of the current pixel is set to (0, 0); if the depth value of the current pixel is within or equal to the limit value, the pixels in the four directions up, down, left, and right of the current pixel are traversed to determine its associated pixel.
  • In this embodiment, the pixels with the highest occurrence frequency among the associated pixels of the sub-picture are determined as the original virtual light source point set, thereby avoiding the large amount of calculation caused by too many light source points while still ensuring the accuracy of determining the target virtual light source points.
  • Determining the associated pixel of each pixel in the sub-picture to obtain the associated pixel point set of the sub-picture includes:
  • S1: Determine each pixel in the sub-picture as the first pixel, and perform the following operations: in each of the four directions up, down, left, and right, determine the pixel closest to the first pixel whose depth value is greater than the depth value of the first pixel as a second pixel, and determine the second pixel with the smallest depth value as the associated pixel of the first pixel;
  • S2: Combine the associated pixels of each pixel in the sub-picture to obtain the associated pixel point set of the sub-picture, where the associated pixel point set may include repeated pixels.
  • The number of second pixels of one pixel in this embodiment may be zero to four. If the number of second pixels is zero, the associated pixel of the pixel is set to (0, 0).
• the associated pixel of each pixel is found by the above traversal, so that a qualifying associated pixel can be found in a short time without traversing all pixels, which improves the efficiency of obtaining the associated pixel of each pixel.
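The directional search described above can be sketched as follows. This is a minimal illustration following the rule as stated in this section (a second pixel has a depth value greater than the first pixel's); the function name, the row-major depth map, and the bound on steps per direction are assumptions for illustration.

```python
def find_associated_pixel(depth, x, y, max_steps=8):
    """Walk up/down/left/right from (x, y); in each direction the nearest
    pixel with a greater depth value is a 'second pixel'. The second pixel
    with the smallest depth value is the associated pixel; (0, 0) if none."""
    h, w = len(depth), len(depth[0])
    d0 = depth[y][x]
    candidates = []
    for dx, dy in ((0, -1), (0, 1), (-1, 0), (1, 0)):  # up, down, left, right
        cx, cy = x, y
        for _ in range(max_steps):
            cx, cy = cx + dx, cy + dy
            if not (0 <= cx < w and 0 <= cy < h):
                break
            if depth[cy][cx] > d0:       # deeper pixel found in this direction
                candidates.append((depth[cy][cx], cx, cy))
                break
    if not candidates:
        return (0, 0)                    # no second pixel in any direction
    _, ax, ay = min(candidates)          # second pixel with smallest depth
    return (ax, ay)

# Example 5x5 depth map: a deeper pixel above (3.0) and to the right (2.0)
# of the centre pixel (2, 2), whose depth is 1.0.
depth_example = [
    [1.0, 1.0, 3.0, 1.0, 1.0],
    [1.0, 1.0, 1.0, 1.0, 1.0],
    [1.0, 1.0, 1.0, 1.0, 2.0],
    [1.0, 1.0, 1.0, 1.0, 1.0],
    [1.0, 1.0, 1.0, 1.0, 1.0],
]
```

Of the two second pixels found for the centre pixel, the one at (4, 2) has the smaller depth (2.0 versus 3.0) and is therefore the associated pixel.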
  • determining the target virtual light source point set for lighting rendering of the first picture includes:
  • S2 Combine the original virtual light source point sets of each sub-picture to obtain a target virtual light source point set, where the target virtual light source point set does not include repeated light source points.
  • a first threshold is preset in this embodiment.
• the first threshold is used to control whether to calculate the target virtual light source points of the first picture. For example, suppose there are 10 pictures to be rendered, the interval between consecutive pictures is 0.04 seconds, and the first threshold is 0.2 seconds.
• the target virtual light source points are calculated for the first of the 10 pictures and assigned to the first through fifth pictures; the target virtual light source points of the sixth picture are then calculated and assigned to the sixth through tenth pictures. That is, the calculation is performed every 0.2 seconds rather than for every picture.
• the target virtual light source points are calculated once per first-threshold interval, thereby reducing the frequency of calculating the target virtual light source points and improving the efficiency of determining them.
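The interval-based reuse described above can be sketched as follows; the function names and the per-picture `compute` callback are illustrative assumptions, not part of the application.

```python
def light_sources_for_frames(timestamps, threshold, compute):
    """Recompute the light source set only when at least `threshold`
    seconds have elapsed since the last computed picture; otherwise
    reuse the previous picture's set."""
    results, last_time, last_set = [], None, None
    for i, t in enumerate(timestamps):
        if last_time is None or t - last_time >= threshold:
            last_set = compute(i)      # expensive per-picture computation
            last_time = t
        results.append(last_set)       # reused between computations
    return results
```

With 10 frames spaced 0.04 s apart and a 0.2 s threshold, `compute` runs only for frames 0 and 5, matching the example above.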
  • determining the target virtual light source point set for lighting rendering of the first picture further includes:
• the target virtual light source point set of the previous picture is directly assigned to the first picture, thereby improving the efficiency of determining the target virtual light source points.
  • performing illumination rendering on the first picture according to the target virtual light source point set includes:
• S1 Obtain the lighting result of the target virtual light source point set for each pixel in the first picture to obtain the light map of the first picture, where the light map records the lighting value of each pixel in the first picture;
• the target virtual light source point set is used to determine the lighting result of each pixel in the first picture, thereby obtaining the light map of the first picture, and the light map and the color map are combined to determine the rendered first picture.
• the light map can be superimposed on the color map after its transparency is adjusted, or the light map can be superimposed on the color map directly.
  • the first picture can be directly processed to achieve the purpose of lighting rendering of virtual objects in the virtual three-dimensional scene, which improves the efficiency of lighting rendering.
  • obtaining the illumination result of each pixel in the first picture by the target virtual light source point set includes:
• S3 Accumulate the first illumination values of the virtual light source points in the target virtual light source point set on the third pixel to obtain the lighting result.
• the first illumination value of each target virtual light source point in the target virtual light source point set for a pixel is determined, and the first illumination values are superimposed to obtain the lighting result, thereby improving the efficiency of obtaining the light map of the first picture.
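The per-pixel accumulation can be sketched as follows. The `illumination` callback is a stand-in assumption for the real per-source lighting model, which the application does not fix here.

```python
def pixel_lighting(pixel, light_sources, illumination):
    """Lighting result of one pixel: the sum of the first illumination
    values contributed by every point in the target light source set."""
    return sum(illumination(src, pixel) for src in light_sources)

def light_map(pixels, light_sources, illumination):
    """One lighting value per pixel of the first picture."""
    return {p: pixel_lighting(p, light_sources, illumination) for p in pixels}
```

The resulting map is then superimposed on the colour map to obtain the rendered picture.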
• the method of performing lighting rendering on the virtual object in the first picture according to the target virtual light source point set may be: S1, obtain the target virtual light source point set of each processed picture in the picture set before the first picture; determine the second virtual light source point set of the first picture according to the target virtual light source point sets of the pictures in the picture set, where the first picture is the last picture in the picture set; replace N pixels in the second virtual light source point set of the first picture with N pixels in the first virtual light source point set of the processed picture of the previous frame, and determine the replaced second virtual light source point set as the first virtual light source point set of the first picture; and use the first virtual light source point set of the first picture to perform lighting rendering on the virtual object in the first picture.
• that is, the target virtual light source point set is not used directly to process the first picture; instead, the first virtual light source point set of the first picture is calculated from the target virtual light source point sets of the pictures in the picture set, and the calculated first virtual light source point set is used to process the first picture.
• the first virtual light source point set of the first picture is determined according to the target virtual light source point sets of the pictures in the picture set, and the first picture is lighting-rendered according to the first virtual light source point set, which avoids flickering during rendering.
  • determining the second virtual light source point set of the first picture according to the target virtual light source point set of each picture in the picture set includes:
• S4 Use the K pixels with the largest weight sums in the target virtual light source point set of the first picture as the second virtual light source point set of the first picture, where K is an integer greater than zero.
• for example, suppose the processing order is processed picture 1, processed picture 2, and processed picture 3, with weights 0.1, 0.3, and 0.6 respectively.
• for a pixel in the target virtual light source point set of the first picture, the target virtual light source point set of each of processed pictures 1-3 is traversed. If the pixel is found in the target virtual light source point set of processed picture 1, the weight 0.1 of processed picture 1 is obtained; if the pixel is also found in the target virtual light source point set of processed picture 3, the weight 0.6 of processed picture 3 is obtained, and the weight sum is calculated as 0.7.
• the weight sum of each pixel in the target virtual light source point set of the first picture is determined and sorted, and the pixels with the largest weight sums are selected as the second virtual light source point set.
  • the first virtual light source point set of the first picture is determined according to the first virtual light source point set of the previous picture and the second virtual light source point set of the first picture.
  • the first virtual light source point set is determined by the above method, thereby ensuring the accuracy of determining the first virtual light source point set.
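The weight-sum selection above can be sketched as follows; the function and variable names are hypothetical, and the per-picture weights and sets mirror the 0.1/0.3/0.6 example.

```python
def second_light_source_set(candidates, processed_sets, weights, k):
    """For each candidate point, sum the weights of the processed pictures
    whose target light source sets contain it; keep the K largest sums."""
    def weight_sum(pt):
        return sum(w for s, w in zip(processed_sets, weights) if pt in s)
    return sorted(candidates, key=weight_sum, reverse=True)[:k]
```

A point appearing in processed pictures 1 and 3 accumulates 0.1 + 0.6 = 0.7 and outranks a point appearing only in picture 3 (0.6).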
  • the device includes:
  • the acquiring unit 902 is configured to acquire a first picture in a virtual three-dimensional scene under a target perspective, where the first picture includes a virtual object to be illuminated and rendered in the virtual three-dimensional scene under the target perspective;
  • the determining unit 904 is configured to determine a target virtual light source point set for performing illumination rendering on the virtual object in the first picture
  • the rendering unit 906 is configured to perform lighting rendering on the virtual object in the first picture according to the target virtual light source point set.
  • the above-mentioned illumination rendering method can be, but not limited to, applied to the process of performing illumination rendering on a three-dimensional virtual scene.
• for example, performing lighting rendering on a three-dimensional virtual scene of a game, on a three-dimensional virtual scene of virtual training, or on a three-dimensional virtual scene of virtual shopping.
  • the virtual three-dimensional scene includes virtual objects to be rendered by global illumination.
  • the foregoing method requires a large amount of calculation, and the global illumination rendering efficiency of the virtual three-dimensional scene is low.
• the first picture under the target perspective of the virtual three-dimensional scene is acquired, the target virtual light source point set for lighting rendering of the virtual object in the first picture is determined, and the target virtual light source point set is used to perform lighting rendering on the virtual object in the first picture, so that only the first picture is lighting-rendered and real-time rendering of the virtual 3D scene is unnecessary, which reduces the amount of calculation in the rendering process and improves the efficiency of rendering the virtual 3D scene.
  • the above determining unit includes:
  • the first determining module is used to determine the original virtual light source point set of each sub-picture in the first picture, wherein the first picture includes multiple sub-pictures;
  • the first merging module is used to merge the original virtual light source point sets of each sub-picture to obtain a target virtual light source point set, wherein the target virtual light source point set does not include repeated light source points.
  • the original picture is split into multiple sub pictures, and then the original virtual light source point set of each sub picture is calculated and merged into a target virtual light source point set, so that the target virtual light source point of the target virtual light source point set is relatively Scattered, not concentrated in a small area of the original picture, thereby improving the accuracy of determining the target virtual light source point.
  • the above-mentioned first determining module includes:
  • the first determining sub-module is used to determine the associated pixel point of each pixel in the sub-picture for each sub-picture in the first picture, and obtain the associated pixel point set of the sub-picture;
  • the second determining sub-module is configured to determine the first M pixels with the highest occurrence frequency in the associated pixel point set as the original virtual light source point set of the sub-picture, where M is an integer greater than zero.
• the pixels with the highest occurrence frequency among the associated pixels of the sub-picture's pixels are determined as the original virtual light source point set, which avoids the large amount of calculation caused by a large number of light source points while still ensuring the accuracy of determining the target virtual light source points.
  • the above-mentioned first determining submodule is further configured to perform the following steps:
• determine each pixel in the sub-picture as the first pixel and perform the following operations: in each of the four directions (up, down, left, right), determine the pixel that is closest to the first pixel and whose depth value is greater than the depth value of the first pixel as a second pixel, and determine the second pixel with the smallest depth value as the associated pixel of the first pixel;
• the associated pixel of each pixel is found by the above traversal, so that a qualifying associated pixel can be found in a short time without traversing all pixels, which improves the efficiency of obtaining the associated pixel of each pixel.
  • the above determining unit includes:
• the second determining module is configured to determine the original virtual light source point set of each sub-picture in the first picture if the time difference between the time when the first picture is acquired and the time when the (J-1)-th processed picture is acquired is greater than or equal to the first threshold;
  • the second merging module is used to merge the original virtual light source point sets of each sub-picture to obtain the target virtual light source point set of the first picture, wherein the target virtual light source point set does not include repeated light source points.
  • the target virtual light source point is calculated every first threshold value, thereby reducing the frequency of calculating the target virtual light source point and improving the efficiency of determining the target virtual light source point.
  • the above determining unit further includes:
  • the third determining module is used to set the target virtual light source point of the J-1th processed picture if the time difference between the time when the first picture is acquired and the time when the J-1th processed picture is acquired is less than the first threshold Set as the target virtual light source point set of the first picture, where the first picture is the J-th picture in the picture set, and J is an integer greater than 1.
• the target virtual light source point set of the previous picture is directly assigned to the first picture, thereby improving the efficiency of determining the target virtual light source points.
  • the foregoing rendering unit includes:
• the first acquisition module is used to acquire the lighting result of the target virtual light source point set for each pixel in the first picture to obtain the light map of the first picture, where the light map records the lighting value of each pixel in the first picture;
  • the superimposition module is used to superimpose the light map and the color map of the first picture to obtain the rendered first picture.
  • the first picture can be directly processed to achieve the purpose of lighting rendering of virtual objects in the virtual three-dimensional scene, which improves the efficiency of lighting rendering.
  • the above-mentioned acquisition module includes:
  • the execution sub-module is used to take each pixel in the first picture as the third pixel and perform the following operations until the illumination result of each pixel in the first picture is determined:
• the first illumination value of each target virtual light source point in the target virtual light source point set for a pixel is determined, and the first illumination values are superimposed to obtain the lighting result, thereby improving the efficiency of obtaining the light map of the first picture.
• the above-mentioned rendering unit is used to obtain the target virtual light source point set of each processed picture before the first picture in the picture set; determine the second virtual light source point set of the first picture according to the target virtual light source point sets of the pictures in the picture set, where the first picture is the last picture in the picture set; replace N pixels in the second virtual light source point set of the first picture with N pixels in the first virtual light source point set of the processed picture of the previous frame; determine the replaced second virtual light source point set as the first virtual light source point set of the first picture; and perform lighting rendering on the virtual object in the first picture using the first virtual light source point set of the first picture.
• the first virtual light source point set of the first picture is determined according to the target virtual light source point sets of the pictures in the picture set, and the first picture is lighting-rendered according to the first virtual light source point set, which avoids flickering during rendering.
  • the above processing unit further includes:
  • the second acquisition module is used to acquire the weight value set for each processed picture in the picture set
  • the third acquisition module is configured to acquire the weight of the processed picture including the pixel points in the target virtual light source point set of the processed picture for each pixel in the target virtual light source point set of the first picture;
• the fourth obtaining module is used to obtain the sum of the weights to obtain the weight sum of each pixel in the first picture;
• the fourth determining module is configured to use the K pixels with the largest weight sums in the first picture as the second virtual light source point set of the first picture, where K is an integer greater than zero.
  • the first virtual light source point set is determined by the above method, thereby ensuring the accuracy of determining the first virtual light source point set.
  • the electronic device for implementing the above illumination rendering method.
  • the electronic device includes a memory 1002 and a processor 1004.
• the memory 1002 stores a computer program.
  • the processor 1004 is configured to execute the steps in any one of the foregoing method embodiments through a computer program.
  • the above electronic device may be located in at least one network device among multiple network devices in the computer network.
  • the foregoing processor may be configured to execute the following steps through a computer program:
  • S1 Acquire a first picture in the virtual three-dimensional scene under the target perspective, where the first picture includes a virtual object to be illuminated and rendered in the virtual three-dimensional scene under the target perspective;
• S2 Determine a target virtual light source point set for performing lighting rendering on the virtual object in the first picture;
• S3 Perform lighting rendering on the virtual object in the first picture according to the target virtual light source point set.
  • the structure shown in FIG. 10 is only for illustration, and the electronic device may also be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, and a mobile Internet device (Mobile Internet Devices, MID), PAD and other terminal devices.
  • FIG. 10 does not limit the structure of the above electronic device.
  • the electronic device may also include more or fewer components (such as a network interface, etc.) than shown in FIG. 10, or have a configuration different from that shown in FIG.
  • the memory 1002 can be used to store software programs and modules, such as program instructions/modules corresponding to the lighting rendering method and device in the embodiments of the present application.
• the processor 1004 runs the software programs and modules stored in the memory 1002 to perform various functional applications and data processing, that is, to realize the above-mentioned lighting rendering method.
  • the memory 1002 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 1002 may further include a memory remotely provided with respect to the processor 1004, and these remote memories may be connected to the terminal through a network.
  • the memory 1002 may be specifically, but not limited to, storing information such as the first picture and the target virtual light source point set.
  • the foregoing memory 1002 may include, but is not limited to, the acquiring unit 902, the determining unit 904, and the rendering unit 906 in the foregoing illumination rendering device.
  • it may also include, but is not limited to, other module units in the above-mentioned illumination rendering device, which will not be repeated in this example.
  • the aforementioned transmission device 1006 is used to receive or send data via a network.
  • the above-mentioned specific examples of networks may include wired networks and wireless networks.
  • the transmission device 1006 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers via a network cable so as to communicate with the Internet or a local area network.
  • the transmission device 1006 is a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
  • the above-mentioned electronic device further includes: a display 1008 for displaying the first picture after lighting rendering; and a connection bus 1010 for connecting each module component in the above-mentioned electronic device.
  • a storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any of the foregoing method embodiments when running.
  • the foregoing storage medium may be configured to store a computer program for executing the following steps:
  • S1 Acquire a first picture in the virtual three-dimensional scene under the target perspective, where the first picture includes a virtual object to be illuminated and rendered in the virtual three-dimensional scene under the target perspective;
• S2 Determine a target virtual light source point set for performing lighting rendering on the virtual object in the first picture;
• S3 Perform lighting rendering on the virtual object in the first picture according to the target virtual light source point set.
• the storage medium may include a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or the like.
• if the integrated unit in the foregoing embodiment is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the foregoing computer-readable storage medium.
  • the technical solution of this application essentially or the part that contributes to the prior art or all or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, A number of instructions are included to make one or more computer devices (which may be personal computers, servers, or network devices, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the disclosed client can be implemented in other ways.
  • the device embodiments described above are only illustrative.
• the division of the units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
• the above-mentioned integrated unit can be implemented in the form of hardware or in the form of a software functional unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application discloses a lighting rendering method and apparatus, a storage medium, and an electronic device. The method includes: acquiring a first picture under a target perspective in a virtual three-dimensional scene, where the first picture includes a virtual object to be lighting-rendered in the virtual three-dimensional scene under the target perspective; determining a target virtual light source point set for performing lighting rendering on the virtual object in the first picture; and performing lighting rendering on the virtual object in the first picture according to the target virtual light source point set. This application solves the technical problem in the related art of low efficiency in global illumination rendering of virtual objects in virtual three-dimensional scenes.

Description

Lighting rendering method and apparatus, storage medium, and electronic device
This application claims priority to Chinese Patent Application No. 201910413385.7, entitled "Lighting rendering method and apparatus, storage medium and electronic device", filed with the China National Intellectual Property Office on May 17, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computers, and in particular to lighting rendering technology.
Background
In the related art, before virtual objects in a virtual three-dimensional scene are displayed, real-time global illumination rendering usually needs to be performed on the virtual objects in the scene, and the rendered objects in the virtual scene are then displayed.
However, since real-time rendering of virtual objects in a three-dimensional scene requires complex steps and consumes a large amount of resources, the global illumination rendering methods proposed in the related art are inefficient.
No effective solution to the above problem has been proposed so far.
Summary
Embodiments of this application provide a lighting rendering method and apparatus, a storage medium, and an electronic device, to solve at least the technical problem in the related art of low efficiency in global illumination rendering of virtual objects in a virtual three-dimensional scene.
According to one aspect of the embodiments of this application, a lighting rendering method is provided, including: acquiring a first picture under a target perspective in a virtual three-dimensional scene, where the first picture includes a virtual object to be lighting-rendered in the virtual three-dimensional scene under the target perspective; determining a target virtual light source point set for performing lighting rendering on the virtual object in the first picture; and performing lighting rendering on the virtual object in the first picture according to the target virtual light source point set.
According to another aspect of the embodiments of this application, a lighting rendering apparatus is further provided, including: an acquiring unit, configured to acquire a first picture under a target perspective in a virtual three-dimensional scene, where the first picture includes a virtual object to be lighting-rendered in the virtual three-dimensional scene under the target perspective; a determining unit, configured to determine a target virtual light source point set for performing lighting rendering on the virtual object in the first picture; and a rendering unit, configured to perform lighting rendering on the virtual object in the first picture according to the target virtual light source point set.
As an optional example, the determining unit includes: a first determining module, configured to determine an original virtual light source point set of each sub-picture in the first picture, where the first picture includes multiple sub-pictures; and a first merging module, configured to merge the original virtual light source point sets of the sub-pictures into a target virtual light source point set, where the target virtual light source point set contains no repeated light source points.
As an optional example, the first determining module includes: a first determining sub-module, configured to determine, for each sub-picture in the first picture, the associated pixel of each pixel in the sub-picture to obtain the associated pixel set of the sub-picture; and a second determining sub-module, configured to determine the first M pixels with the highest occurrence frequency in the associated pixel set as the original virtual light source point set of the sub-picture, where M is an integer greater than zero.
As an optional example, the first determining sub-module is further configured to perform the following steps: determine each pixel in the sub-picture as a first pixel and perform the following operations: in each of the four directions (up, down, left, right) of the first pixel, determine the pixel that is closest to the first pixel and whose depth value is greater than the depth value of the first pixel as a second pixel, and determine the second pixel with the smallest depth value as the associated pixel of the first pixel; and merge the associated pixels of the pixels in the sub-picture into the associated pixel set of the sub-picture, where the associated pixel set contains repeated pixels.
As an optional example, the determining unit includes: a second determining module, configured to determine the original virtual light source point set of each sub-picture in the first picture if the time difference between the time when the first picture is acquired and the time when the (J-1)-th processed picture is acquired is greater than or equal to a first threshold, where the first picture includes multiple sub-pictures, the first picture is the J-th picture in a picture set, and J is an integer greater than 1; and a second merging module, configured to merge the original virtual light source point sets of the sub-pictures into the target virtual light source point set of the first picture, where the target virtual light source point set contains no repeated light source points.
As an optional example, the determining unit further includes: a third determining module, configured to use the target virtual light source point set of the (J-1)-th processed picture as the target virtual light source point set of the first picture if the time difference between the time when the first picture is acquired and the time when the (J-1)-th processed picture is acquired is less than the first threshold, where the first picture is the J-th picture in the picture set and J is an integer greater than 1.
As an optional example, the rendering unit includes: a first acquiring module, configured to acquire the lighting result of the target virtual light source point set for each pixel in the first picture to obtain a light map of the first picture, where the light map records the lighting value of each pixel in the first picture; and a superimposing module, configured to superimpose the light map and the color map of the first picture to obtain the rendered first picture.
As an optional example, the acquiring module includes: an execution sub-module, configured to take each pixel in the first picture as a third pixel and perform the following operations until the lighting result of each pixel in the first picture is determined: determine the first illumination value of each virtual light source point in the target virtual light source point set for the third pixel; and accumulate the first illumination values of the virtual light source points in the target virtual light source point set for the third pixel to obtain the lighting result.
As an optional example, the rendering unit is configured to acquire the target virtual light source point set of each processed picture before the first picture in the picture set; determine a second virtual light source point set of the first picture according to the target virtual light source point sets of the pictures in the picture set, where the first picture is the last picture in the picture set; replace N pixels in the second virtual light source point set of the first picture with N pixels in the first virtual light source point set of the processed picture one frame before the first picture, and determine the replaced second virtual light source point set as the first virtual light source point set of the first picture; and perform lighting rendering on the virtual object in the first picture using the first virtual light source point set of the first picture.
As an optional example, the processing unit further includes: a second acquiring module, configured to acquire a weight value set for each processed picture in the picture set; a third acquiring module, configured to acquire, for each pixel in the target virtual light source point set of the first picture, the weights of the processed pictures whose target virtual light source point sets include the pixel; a fourth acquiring module, configured to obtain the sum of the weights to obtain the weight sum of each pixel in the first picture; and a fourth determining module, configured to use the K pixels with the largest weight sums in the first picture as the second virtual light source point set of the first picture, where K is an integer greater than zero.
According to yet another aspect of the embodiments of this application, a storage medium is further provided, storing a computer program, where the computer program is configured to perform the above lighting rendering method when run.
According to yet another aspect of the embodiments of this application, an electronic device is further provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor performs the above lighting rendering method through the computer program.
In the embodiments of this application, a first picture under a target perspective in a virtual three-dimensional scene is acquired, where the first picture includes a virtual object to be lighting-rendered in the virtual three-dimensional scene under the target perspective; a target virtual light source point set for performing lighting rendering on the virtual object in the first picture is determined, the target virtual light source points being the main light source points for rendering the virtual object in the virtual three-dimensional scene; and lighting rendering is performed on the virtual object in the first picture according to the target virtual light source point set. In this method, since the first picture under the target perspective in the virtual three-dimensional scene is directly acquired and lighting rendering is performed on the first picture according to its target virtual light source point set, there is no need to render the virtual object within the virtual three-dimensional scene itself; the main light source points for rendering the virtual object are obtained quickly, which improves the efficiency of rendering virtual objects in the virtual three-dimensional scene and thus solves the technical problem in the related art of low efficiency in global illumination rendering of virtual objects in a virtual three-dimensional scene.
Brief Description of the Drawings
The accompanying drawings described here are used to provide further understanding of this application and form a part of this application. The exemplary embodiments of this application and their descriptions are used to explain this application and do not constitute improper limitations on this application. In the drawings:
FIG. 1 is a schematic diagram of the application environment of an optional lighting rendering method according to an embodiment of this application;
FIG. 2 is a schematic flowchart of an optional lighting rendering method according to an embodiment of this application;
FIG. 3 is a schematic diagram of an optional lighting rendering method according to an embodiment of this application;
FIG. 4 is a schematic diagram of another optional lighting rendering method according to an embodiment of this application;
FIG. 5 is a schematic diagram of yet another optional lighting rendering method according to an embodiment of this application;
FIG. 6 is a schematic diagram of yet another optional lighting rendering method according to an embodiment of this application;
FIG. 7 is a schematic flowchart of another optional lighting rendering method according to an embodiment of this application;
FIG. 8 is a schematic diagram of yet another optional lighting rendering method according to an embodiment of this application;
FIG. 9 is a schematic structural diagram of an optional lighting rendering apparatus according to an embodiment of this application;
FIG. 10 is a schematic structural diagram of an optional electronic device according to an embodiment of this application.
Detailed Description
To enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are only a part rather than all of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative work shall fall within the protection scope of this application.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of this application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of this application described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.
According to one aspect of the embodiments of this application, a lighting rendering method is provided. Optionally, as an optional implementation, the lighting rendering method may be, but is not limited to being, applied in the environment shown in FIG. 1.
In FIG. 1, human-computer interaction can take place between the user 102 and the user device 104. The user device 104 includes a memory 106 for storing interaction data and a processor 108 for processing interaction data. The user device 104 can exchange data with the server 112 via the network 110. The server 112 includes a database 114 for storing interaction data and a processing engine 116 for processing interaction data. A virtual three-dimensional scene runs in the user device 104. Through steps S102 to S106, the user device 104 acquires the first picture under the target perspective, determines the target virtual light source point set of the first picture, and uses the target virtual light source point set to render the virtual object in the first picture, so that lighting rendering is performed on the first picture without performing global illumination rendering on the virtual objects in the virtual three-dimensional scene, which improves the efficiency of lighting rendering of the virtual three-dimensional scene.
Optionally, the above lighting rendering method may be, but is not limited to being, applied to terminals capable of computing data, such as mobile phones, tablet computers, laptop computers, PCs, and other terminals. The network may include, but is not limited to, a wireless network or a wired network, where the wireless network includes wireless local area networks (Wireless Fidelity, WIFI) and other networks implementing wireless communication, and the wired network may include, but is not limited to, wide area networks, metropolitan area networks, and local area networks. The server may include, but is not limited to, any hardware device capable of computing.
As an optional implementation, as shown in FIG. 2, the above lighting rendering method includes:
S202: Acquire a first picture under a target perspective in a virtual three-dimensional scene, where the first picture includes a virtual object to be lighting-rendered in the virtual three-dimensional scene under the current perspective;
S204: Determine a target virtual light source point set for performing lighting rendering on the virtual object in the first picture;
S206: Perform lighting rendering on the virtual object in the first picture according to the target virtual light source point set.
Optionally, the above lighting rendering method may be, but is not limited to being, applied to the process of performing lighting rendering on a three-dimensional virtual scene, for example, performing lighting rendering on a three-dimensional virtual scene of a game, of virtual training, or of virtual shopping.
Lighting rendering of a virtual three-dimensional scene of a game is taken as an example for description. During the game, a virtual three-dimensional scene runs, which includes virtual objects on which global illumination rendering is to be performed. In the related art, the influence of the virtual light sources in the virtual three-dimensional scene on the lighting of the virtual objects usually needs to be calculated in real time to obtain a calculation result, and global illumination rendering is then performed on the virtual objects. However, this method involves a large amount of calculation, and the efficiency of global illumination rendering of the virtual three-dimensional scene is low. In this embodiment, the first picture under the target perspective of the virtual three-dimensional scene is acquired, the target virtual light source point set for performing lighting rendering on the virtual object in the first picture is determined, and the target virtual light source point set is used to perform lighting rendering on the virtual object in the first picture, so that only the first picture is lighting-rendered and the virtual three-dimensional scene no longer needs to be rendered in real time, which reduces the amount of calculation in the rendering process and improves the efficiency of rendering the virtual three-dimensional scene.
Optionally, in this embodiment, when determining the target virtual light source point set, the first picture may be split into multiple sub-pictures, the original virtual light source point set of each sub-picture may be determined, and the original virtual light source point sets may be merged to obtain a target virtual light source point set containing no repeated light source points.
FIG. 3 is a schematic diagram of an optional splitting of the first picture. As shown in FIG. 3, the first picture 302 is split into multiple sub-pictures 304, where each white rectangular box in FIG. 3 represents one sub-picture.
Optionally, when determining the original virtual light source point set of each sub-picture, the associated pixel of each pixel in the sub-picture needs to be determined, and the original virtual light source point set is then determined.
In the embodiments of this application, various methods may be used to determine the associated pixel set of a sub-picture. In one possible implementation, the associated pixel set of a sub-picture may be determined by determining each pixel in the sub-picture as a first pixel and, for each first pixel, determining, in each of the four directions (up, down, left, right) of the first pixel, the pixel that is closest to the first pixel and whose depth value is greater than the depth value of the pixel as a second pixel. Then, the second pixel with the smallest depth value is determined as the associated pixel of the first pixel, and the associated pixels of the pixels in the sub-picture are merged to obtain the associated pixel set of the sub-picture, where the associated pixel set contains repeated pixels.
Taking one sub-picture as an example, as shown in FIG. 4, the sub-picture 402 is placed in a plane rectangular coordinate system, and each pixel 404 in the sub-picture corresponds to four directions: positive x, positive y, negative x, and negative y. With the positive x direction taken as right and the positive y direction taken as up by default, each pixel corresponds to the four directions up, down, left, and right. Each pixel in the sub-picture is determined as a first pixel, the pixels closest to the first pixel in its four directions, for example the adjacent pixels, are traversed, and the depth values of the pixels adjacent to the first pixel are determined. If the depth value of an adjacent pixel is less than the depth value of the first pixel, the traversal in that direction is stopped; if the depth value of the adjacent pixel is greater than the depth value of the first pixel, the traversal in that direction continues, until a pixel whose depth value is less than the depth value of the first pixel has been traversed in each direction. FIG. 5 is a schematic diagram of the second pixels 504 found in the four directions after traversing from the first pixel 502. A value needs to be set during the traversal: in each of the four directions, after a specified number of pixels have been traversed, the traversal stops even if no second pixel has been found. If no second pixel at all is found, the first pixel is determined as the second pixel. After the second pixels are found, the second pixel with the smallest depth value is determined as the associated pixel of the first pixel.
Optionally, after the associated pixel of each pixel in a sub-picture is determined, the associated pixels of the pixels are merged into an associated pixel set, which contains repeated associated pixels. The first M pixels with the highest occurrence frequency in the associated pixel set are determined as the original virtual light source point set of the sub-picture. After the original virtual light source point set of each sub-picture is acquired, the original virtual light source point sets of the sub-pictures are merged into one light source point set containing no repeated light source points, the merged light source point set is determined as the target virtual light source point set of the first picture, and the target virtual light source point set of the first picture is used to perform lighting rendering on the virtual object in the first picture.
Optionally, in this embodiment, the target virtual light source points of the first picture are calculated once every predetermined time. If the first picture is the J-th picture in the picture set, J being an integer greater than 1, the time when the processed picture of the previous frame (for example, the (J-1)-th picture) was acquired and the time when the first picture is acquired are determined. If the time difference between the two is less than the first threshold, the target virtual light source point set of the processed picture one frame before the first picture is determined as the target virtual light source point set of the first picture, so as to reduce the computational load of the system.
If the time difference between the two is greater than or equal to the first threshold, the original virtual light source point set of each sub-picture in the first picture is determined, and the original virtual light source point sets of the sub-pictures are merged to obtain a target virtual light source point set containing no repeated light source points.
Optionally, after the target virtual light source point set of the first picture is calculated, it may be used to perform lighting rendering on the virtual object in the first picture. During rendering, for each pixel in the first picture, the lighting result of the target virtual light source point set for the pixel is calculated. After the lighting results of the target virtual light source point set for all the pixels in the first picture are calculated, a light map recording the lighting value of each pixel is obtained. After the light map of the first picture is acquired, the light map is superimposed on the color map of the first picture to obtain a mixed picture, which is the picture after lighting rendering of the first picture.
Optionally, when calculating the lighting result of the target virtual light source point set for a pixel, each pixel in the first picture is taken as a third pixel and the following operations are performed until the lighting result of each pixel in the first picture is determined: for example, the first illumination value of each virtual light source point in the target virtual light source point set for a pixel (for example, the third pixel) may be calculated, the first illumination values of the virtual light source points in the target virtual light source point set for the third pixel may be superimposed to obtain the sum of the first illumination values of the target virtual light source points for the pixel, and the calculated sum is taken as the lighting result of the target virtual light source point set for the pixel.
Optionally, in the above method, the target virtual light source points in the target virtual light source point set are used to perform lighting rendering on the first picture. As another approach, after the target virtual light source point set of the first picture is acquired, the target virtual light source point set may be processed to obtain a first virtual light source point set, and the first virtual light source point set may be used to render the first picture.
Optionally, the above processing is as follows: after the target virtual light source point set of the first picture is acquired, the target virtual light source point sets of all processed pictures before the first picture may be acquired. For example, during rendering, each first picture and the multiple rendered pictures before it are determined as one picture set; the specific number of pictures in the picture set can be configured, and the first picture is the last picture in the picture set. The target virtual light source point set of every picture in the picture set has been determined by the above method. A weight value is set for each picture in the picture set. Then, for each pixel in the target virtual light source point set of the first picture, the target virtual light source point set of every picture in the picture set is traversed. If the pixel is in the target virtual light source point set of a picture in the picture set, the weight of that picture is acquired. After all the target virtual light source point sets in the picture set are traversed, all the acquired weights are accumulated to obtain the weight of the pixel in the target virtual light source point set of the first picture. By the above method, the weight of each pixel in the first picture can be acquired. The first K pixels with the largest weights are determined as the second virtual light source point set of the first picture. N pixels in the second virtual light source point set of the first picture are replaced with N pixels in the first virtual light source point set of the processed picture one frame before the first picture, and the replaced second virtual light source point set is taken as the first virtual light source point set of the first picture. The first virtual light source point set is used to render the first picture. The first picture rendered by this method is more stable than one rendered directly with the target virtual light source points, which avoids temporal flickering of the pictures.
The above lighting rendering method is described below with reference to a game scene. As shown in FIG. 6, FIG. 6 is the display interface of an optional game. The game runs a virtual three-dimensional scene. In this solution, real-time lighting rendering is not performed on the virtual objects in the virtual three-dimensional scene; instead, a picture from the virtual three-dimensional scene is rendered before it is displayed on the client. The picture to be rendered is referred to below as the original picture, that is, the first picture. The description is given with reference to steps S702 to S722 in FIG. 7.
The following functions are defined in advance:
1. abs(x): denotes the absolute value of x.
2. p2w(x,y,z): denotes the three-dimensional (3D) position in the scene world corresponding to the point with coordinates (x,y) and depth z on the 2-dimensional (2D) image.
3. normalize(x): denotes the normalized vector of vector x.
4. cross(x,y): denotes the cross product of vectors x and y.
5. depth(x,y): denotes the depth distance from the observation position to the position in the 3D scene corresponding to the point with coordinates (x,y) on the 2D image, also called the pixel depth.
6. Indirect(px,py,x,y): denotes the indirect illumination from the scene position corresponding to 2D image coordinates (px,py) onto the scene position corresponding to 2D image coordinates (x,y).
7. length(v): denotes the length of vector v.
8. dot(x,y): denotes the dot product of vector x and vector y.
在获取到原始图片后,获取原始图片的颜色图、深度图与法线图。原始图片的颜色图中记录有原始图片每一个像素点的颜色值,原始图片的深度图中记录有原始图片每一个像素点的深度值。深度图与颜色图在系统导出原始图片时可以自动获取,因此,此处不做具体解释。法线图中记录有原始图片每一个像素点的法线值。
可选地,法线图根据深度图确定具体步骤为:
设像素位置(x,y)处的法线值是N(x,y),从深度图上获取的该像素位置深度值是D(x,y)。D(x,y)为已知数据。
设c=D(x,y),L=D(x-1,y),r=D(x+1,y),u=D(x,y+1),d=D(x,y-1),
如果abs(c-L)<abs(c-r),则设minLr=abs(c-L),否则设minLr=abs(c-r),
如果abs(c-u)<abs(c-d),则设minud=abs(c-u),否则设minud=abs(c-d),
设Mid=p2w(x,y,c),
Right=p2w(x+1,y,c+minLr)-Mid,
Top=p2w(x,y+1,c+minud)-Mid,
则N(x,y)=normalize(cross(Right,Top))。
其中,c、L、r、u、d、minLr、minud、Mid、Right、Top为定义的变量,用于解释计算过程。
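上述由深度图推导法线图的步骤,可以用如下示意代码表达(Python/NumPy 草图,仅供理解:其中 p2w 以简化的针孔反投影代替,fx、fy 为假设的相机参数,实际引擎中应使用视图/投影矩阵的逆变换):

```python
import numpy as np

def p2w(x, y, z, fx=1.0, fy=1.0):
    # 假设的针孔相机反投影:2D 坐标 (x, y) 加深度 z 映射到 3D 位置。
    # 实际实现应使用引擎的逆投影矩阵。
    return np.array([x * z / fx, y * z / fy, z], dtype=np.float64)

def normal_from_depth(D, x, y):
    """按文中公式由深度图 D(按 D[y, x] 索引)估计像素 (x, y) 处的法线 N(x, y)。"""
    c = D[y, x]
    L = D[y, x - 1]
    r = D[y, x + 1]
    u = D[y + 1, x]
    d = D[y - 1, x]
    minLr = abs(c - L) if abs(c - L) < abs(c - r) else abs(c - r)
    minud = abs(c - u) if abs(c - u) < abs(c - d) else abs(c - d)
    Mid = p2w(x, y, c)
    Right = p2w(x + 1, y, c + minLr) - Mid
    Top = p2w(x, y + 1, c + minud) - Mid
    n = np.cross(Right, Top)           # cross(Right, Top)
    return n / np.linalg.norm(n)       # normalize(...)
```

例如,对一张深度值处处相同的深度图(即一个正对观察方向的平面),该函数返回朝向观察者的法线 (0, 0, 1)。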
通过上述方法获取到了原始图片的法线图。原始图片的法线图与颜色图备用。
在获取到原始图片后,需要确定原始图片的目标虚拟光源点集。具体步骤为:
计算原始图片中每一个像素点的关联像素点。规定原始图片中一个像素点的坐标为(x,y),使用宽为N1*W,高为N2*H的分辨率计算(N1与N2通常要小于1,以提高这一过程的计算效率),设点(x,y)的关联像素点的位置为(Uvpl(x,y),Vvpl(x,y)),W为原始图片的宽,H为原始图片的高。它的计算方法是:
1、首先计算像素点(x,y)的深度值depth(x,y),如果depth(x,y)>G,则跳过后面的步骤2-4,设(Uvpl(x,y),Vvpl(x,y))=(0,0),否则继续下去。G为预先设定的控制参数。
2、分别向正x,负x,正y,负y这四个方向遍历每一个像素点(px,py)做如下步骤的操作。
3、如果该像素点的深度值depth(px,py)小于像素点(x,y)的深度值depth(x,y),则认为像素点(px,py)是比像素点(x,y)更靠近观察位置的像素点,或者说,(px,py)是一个在当前观察位置(目标视角)下能对(x,y)产生间接光照的点,并设定(px,py)为像素点(x,y)在该方向的一个第二像素点,并终止该方向的遍历;否则继续向给定方向的下一个像素点遍历,但是遍历的像素点数量最多不超过一个预先设定的数量。
4、找到这四个方向上最大的第二像素点(Pmaxx,Pmaxy),并设定(Uvpl(x,y),Vvpl(x,y))=(Pmaxx,Pmaxy)。如果在四个方向上都找不到一个第二像素点,则设(Uvpl(x,y),Vvpl(x,y))=(0,0)。
通过上述步骤1-4,可以获取到原始图片中每一个像素点的关联像素点。
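上述步骤1-4 的搜索过程可以用如下示意代码表达(Python 草图,仅供理解:max_steps 对应文中"预先设定的数量";"最大的第二像素点"按深度值最大的候选点理解,属于一种可行的解读):

```python
def find_associated_pixel(depth, x, y, G=100.0, max_steps=32):
    """按文中步骤 1-4 在四个方向上搜索像素 (x, y) 的关联像素点。

    depth: 二维列表, depth[y][x] 为像素深度;G 为预先设定的控制参数;
    max_steps 为每个方向最多遍历的像素数。找不到时返回 (0, 0)。
    """
    h, w = len(depth), len(depth[0])
    if depth[y][x] > G:                          # 步骤 1:深度超过 G 直接跳过
        return (0, 0)
    candidates = []
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 步骤 2:四个方向
        px, py = x, y
        for _ in range(max_steps):
            px, py = px + dx, py + dy
            if not (0 <= px < w and 0 <= py < h):
                break
            if depth[py][px] < depth[y][x]:      # 步骤 3:更靠近观察位置的点
                candidates.append((px, py))
                break
    if not candidates:                           # 步骤 4:四个方向均未找到
        return (0, 0)
    # 取"最大的第二像素点":此处按深度值最大者选取(一种可行的理解)
    return max(candidates, key=lambda p: depth[p[1]][p[0]])
```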
在获取到原始图片中每一个像素点的关联像素点后,将原始图像均匀划分为Gw×Gh个子图片,即每个子图片内包含(W/Gw)×(H/Gh)个像素点。W与H为原始图片的宽和高,Gw与Gh为大于1的整数。对于每个子图片,遍历该子图片内的每个像素点在上一步找到的关联像素点的位置,得到一个位置的关联像素点集S,统计S中出现频率最高的M个位置得到原始虚拟光源点集Sn,将Sn包含的关联像素点的位置设置为影响该子图片最重要的间接光源。在对S统计频率最高的间接光源位置的时候,需要加入一个第二阈值t,使得2D上距离小于t的两个位置(u1,v1)和(u2,v2)被认为是同一个位置(u1,v1)。如图8所示,图8中在统计到第一个位置(u1,v1)时,将以(u1,v1)为原点,t为半径的圆设置为(u1,v1)的关联区域。在统计过程中统计到位于(u1,v1)的关联区域内的其他点如(u2,v2)时,将(u1,v1)的统计次数加1,而不将(u2,v2)作为一个新的点进行统计。而若是其他点如(u3,v3)位于(u1,v1)的关联区域外,则将(u3,v3)作为一个新的点进行统计。若是某一个点位于(u1,v1)与(u3,v3)两个点的关联区域内,则在该点距离(u1,v1)比距离(u3,v3)近的情况下,增加(u1,v1)的统计次数;若是该点距离(u3,v3)更近,则增加(u3,v3)的统计次数。最后将该原始图片每个子图片区域内所有的Sn合并成一个元素不重复的目标虚拟光源点集Sf。
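其中按第二阈值 t 合并邻近位置、并统计出现频率最高的 M 个位置的过程,可以用如下示意代码表达(Python 草图,按"距离小于 t 的点计入更近的已有位置"的规则实现):

```python
import math

def top_m_light_sources(points, t=2.0, M=4):
    """统计关联像素点集 S(坐标列表)中出现频率最高的前 M 个位置。

    2D 距离小于第二阈值 t 的位置按同一位置计数;当一个点落在
    多个已有位置的关联区域内时,计入距离更近的那个位置。
    """
    reps, counts = [], []                # 已统计到的代表位置及其次数
    for (u, v) in points:
        best, best_d = -1, None
        for i, (ru, rv) in enumerate(reps):
            d = math.hypot(u - ru, v - rv)
            if d < t and (best_d is None or d < best_d):
                best, best_d = i, d
        if best >= 0:
            counts[best] += 1            # 落入已有位置的关联区域:次数加 1
        else:
            reps.append((u, v))          # 区域外:作为新的位置统计
            counts.append(1)
    order = sorted(range(len(reps)), key=lambda i: counts[i], reverse=True)
    return [reps[i] for i in order[:M]]
```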
在确定出目标虚拟光源点集后,可以使用目标虚拟光源点集对原始图片进行渲染。而为了保证渲染过程更加稳定,本实施例对目标虚拟光源点集进行处理,得到第一虚拟光源点集,并使用第一虚拟光源点集对原始图片进行渲染。第一虚拟光源点集确定过程如下:
将Sf加入到数据库Data中,然后根据数据库的历史信息和Sf计算出Sf’,Sf’被认为比Sf能在时域上产生更加平滑的间接光照值。计算Sf’的具体方法是:从数据库Data中获取过去h帧内的每一帧的目标虚拟光源点集S={Sf,Sf-1,Sf-2,…Sf-h+1}。h为大于1的整数。
设置过去每帧的权重为集合W={Wf,Wf-1,Wf-2,…Wf-h+1},设置Sum(x,y)为坐标为(x,y)处的光源的权重和,遍历{Sf,Sf-1,Sf-2,…Sf-h+1}中的每一个间接光源E,得到E所在的坐标位置(Xe,Ye)和E所在的帧号i,每遍历到一个就将Sum(Xe,Ye)增加第i帧所对应的权重集合W中的权重值W(i)。最后将所有的坐标位置的Sum按照权重从大到小排序,取最前面的K个坐标位置组成原始图片的第二虚拟光源点集Sf”。
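上述按帧权重累加并取权重和最大的前 K 个位置得到 Sf” 的过程,可以用如下示意代码表达(Python 草图,frames 与 weights 的数据结构为假设):

```python
def second_vpl_set(frames, weights, K=2):
    """按文中方法累加过去 h 帧目标虚拟光源点集中每个位置的权重和 Sum(x, y),
    取权重和最大的前 K 个位置作为第二虚拟光源点集 Sf''。

    frames: [Sf, Sf-1, ..., Sf-h+1],每帧为位置集合;
    weights: 对应每帧的权重 W(i)。
    """
    sums = {}
    for frame, w in zip(frames, weights):
        for pos in frame:                        # 遍历每一个间接光源 E
            sums[pos] = sums.get(pos, 0.0) + w   # Sum(Xe, Ye) += W(i)
    # 按权重和从大到小排序,取最前面的 K 个坐标位置
    return sorted(sums, key=lambda p: sums[p], reverse=True)[:K]
```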
设原始图片上一帧的已处理图片的第一虚拟光源点集是Sf-1’,对Sf”和Sf-1’中的每一个元素位置进行一对一的映射,这种映射保证Sf”和Sf-1’中的每一个元素和其映射好的元素的距离之和最小。设Sf”中的第k个元素是Pk,其在Sf-1’中映射的元素是Pk_match,则对所有映射好的元素对(Pk,Pk_match)按照它们之间的2D距离做升序排序,找到其中最小的n个元素对(n是可配置的常量),将Sf”中的这n个元素替换成Sf-1’所对应的映射元素,得到的新集合即为原始图片的第一虚拟光源点集Sf’。
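上述一对一映射与替换最小 n 个元素对的过程,可以用如下示意代码表达(Python 草图:文中要求距离之和最小的一对一映射,此处用贪心最近邻近似,严格求解可使用匈牙利算法等分配算法):

```python
import math

def first_vpl_set(sf2, prev_sf1, n=1):
    """用上一帧第一虚拟光源点集 prev_sf1 中的映射元素,
    替换 Sf''(sf2)中距离最近的 n 个元素对,得到 Sf'。
    """
    pairs, used = [], set()
    for k, pk in enumerate(sf2):
        best, best_d = None, None
        for j, q in enumerate(prev_sf1):         # 贪心最近邻,近似一对一映射
            if j in used:
                continue
            d = math.hypot(pk[0] - q[0], pk[1] - q[1])
            if best_d is None or d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((k, best, best_d))
    result = list(sf2)
    # 按 2D 距离升序,取最小的 n 个元素对,替换为上一帧的映射元素
    for k, j, _ in sorted(pairs, key=lambda p: p[2])[:n]:
        result[k] = prev_sf1[j]
    return result
```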
在获取到原始图片的第一虚拟光源点集后,使用第一虚拟光源点集对原始图片进行光照渲染。
在光照渲染时,应用第一虚拟光源点集Sf’对当前图像做间接光照。对于集合Sf’中的任何一个第一虚拟光源点L,可以从已有的颜色图、深度图、法线图中找到它对应的颜色Lc、虚拟三维场景中的位置Lw与法线Ln;对原始图片的每一个像素点p,也能从已有的颜色图、深度图、法线图中找到它对应的颜色pc、虚拟三维场景中的位置pw与法线pn。设定点光源的衰减系数为c,如果depth(p)<=第三阈值,则应用L对图像上的每一个像素点p,用给定的光照模型M1利用已知信息进行间接光照计算,得到光照结果pi。一种可以实施的光照模型公式如下:
pi=Lc*dot(Ln,normalize(pw-Lw))*dot(pn,normalize(Lw-pw))/(c*length(pw-Lw))。
累计所有Sf’中的间接光源对像素点p的间接光照结果得到光照结果pi_sum,最终得到原始图片的光照图,将其叠加到原始图片的颜色图上,得到新的渲染过的综合图。然后,在如图6所示的游戏场景中,将渲染过的综合图显示在客户端中,以实现对游戏的三维虚拟场景的渲染。
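上述光照模型 M1 以及对像素点 p 累加所有间接光源贡献的过程,可以用如下示意代码表达(Python/NumPy 草图,pixel/lights 的数据结构为假设;对负贡献截断为 0 是常见处理,文中未明确规定):

```python
import numpy as np

def indirect_light(pixel, lights, c=1.0):
    """按文中光照模型 M1 累加集合 Sf' 中所有间接光源对像素 p 的光照结果 pi_sum。

    pixel: (pc, pw, pn) 像素的颜色、世界位置与法线;
    lights: [(Lc, Lw, Ln), ...] 每个虚拟光源点的颜色、世界位置与法线;
    c: 点光源的衰减系数。
    """
    pc, pw, pn = pixel
    pi_sum = np.zeros(3)
    for Lc, Lw, Ln in lights:
        d = pw - Lw
        dist = np.linalg.norm(d)         # length(pw - Lw)
        if dist == 0.0:
            continue
        # pi = Lc*dot(Ln,normalize(pw-Lw))*dot(pn,normalize(Lw-pw))/(c*length(pw-Lw))
        pi = Lc * np.dot(Ln, d / dist) * np.dot(pn, -d / dist) / (c * dist)
        pi_sum += np.maximum(pi, 0.0)    # 负贡献截断为 0(假设的处理方式)
    return pi_sum
```

例如,一个位于像素正上方、法线正对像素的白色光源,对该像素产生最大的正向贡献。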
需要说明的是,本实施例中在计算目标虚拟光源点集中的一个虚拟光源点对原始图片中的一个像素点的第一光照值时,还可以将目标虚拟光源点沿像素点与目标虚拟光源点的延长线移动一段距离,并且将点光源的光照辐射度的衰减系数c调低,这样做可以用点光源模拟面积光源的光照效果,提升光照质量。
通过本实施例,通过获取虚拟三维场景目标视角下的第一图片,并确定第一图片中虚拟图像的目标虚拟光源点集以及使用目标虚拟光源点集对第一图片中虚拟图像进行光照渲染,从而只需要对第一图片进行光照渲染,不需要在虚拟三维场景中对虚拟对象进行实时的渲染,降低了渲染过程的计算量,提高了对虚拟三维场景进行渲染的效率。
作为一种可选的实施方案,确定对第一图片进行光照渲染的目标虚拟光源点集包括:
S1,确定第一图片中每一个子图片的原始虚拟光源点集,其中,第一图片中包括多个子图片;
S2,将每一个子图片的原始虚拟光源点集进行合并得到目标虚拟光源点集,其中,目标虚拟光源点集中不包含重复的光源点。
可选地,本实施例中在获取到第一图片后,是将第一图片拆分成多个子图片,并计算每一个子图片的原始虚拟光源点集。如图3所示,将原始图片302拆分成多个子图片304,并计算每一个子图片304的原始虚拟光源点集,以及根据每一个子图片的原始虚拟光源点集合并为目标虚拟光源点集。
通过本实施例,通过将第一图片拆分成多个子图片,然后计算每个子图片的原始虚拟光源点集,并合并为目标虚拟光源点集,从而使目标虚拟光源点集的目标虚拟光源点相对分散,不会集中到原始图片的一个较小的区域内,从而提高了确定的目标虚拟光源点的准确性。
作为一种可选的实施方案,确定第一图片中每一个子图片的原始虚拟光源点集包括:
S1,针对第一图片中每一个子图片,确定子图片中每一个像素点的关联像素点,得到该子图片的关联像素点集;
S2,将关联像素点集中出现频率最高的前M个像素点确定为该子图片的原始虚拟光源点集,其中,M为大于零的整数。
可选地,在确定每一个像素点的关联像素点时,若当前像素点的深度值大于一个限定值,则需要将当前像素点的关联像素点确定为(0,0);若当前像素点的深度值小于或等于限定值,则可以遍历当前像素点的上下左右四个方向的像素点,确定当前像素点的关联像素点。
通过本实施例,通过在确定子图片的关联像素点集时,将子图片的像素点的关联像素点中出现频率最高的几个像素点确定为原始虚拟光源点集,从而避免了因需要计算的光源点数量过大而造成的运算量大的问题。而且由于根据上述方法选择出的频率最高的几个像素点的重要性高,因此,也保证了确定目标虚拟光源点时的准确率。
在本申请实施例中可以采用多种方法确定子图片的关联像素点集,作为一种可选的实施方案,确定子图片中每一个像素点的关联像素点,得到该子图片的关联像素点集包括:
S1,将子图片中每一个像素点分别确定为第一像素点,执行以下操作:分别将第一像素点上、下、左、右四个方向的每个方向中,与第一像素点距离最近且深度值大于第一像素点深度值的一个像素点确定为第二像素点,将深度值最小的第二像素点确定为第一像素点的关联像素点;
S2,将每一个子图片中每一个像素点的关联像素点进行合并得到子图片的关联像素点集,其中,关联像素点集中包含重复的像素点。
例如,在确定每一个像素点的关联像素点时,需要遍历该像素点的上下左右四个方向上的像素点,并将遍历到的第一个深度值小于该像素点深度值的像素点,确定为该像素点的第二像素点。
需要说明的是,本实施例中的一个像素点的第二像素点的数量可以为零到四个。若是第二像素点的数量为零个,则将该像素点的关联像素点设置为(0,0)。
通过本实施例,通过上述方法遍历每一个像素点的关联像素点,从而可以在较短的时间内遍历到符合条件的关联像素点,而不需要遍历所有的像素点,提高了获取每一个像素点的关联像素点的效率。
作为一种可选的实施方案,确定对第一图片进行光照渲染的目标虚拟光源点集包括:
S1,若获取到第一图片的时间距获取到第J-1张已处理图片的时间差大于或者等于第一阈值,确定第一图片中每一个子图片的原始虚拟光源点集,其中,第一图片中包括多个子图片,第一图片为图片集中的第J张图片,J为大于1的整数;
S2,将每一个子图片的原始虚拟光源点集进行合并得到目标虚拟光源点集,其中,目标虚拟光源点集中不包含重复的光源点。
可选地,本实施例中预先设定一个第一阈值。第一阈值用于控制是否计算第一图片的目标虚拟光源点。例如,以10张待进行渲染的图片,每一张图片之间间隔为0.04秒,第一阈值为0.2秒为例,10张图片中的第一张图片计算得到目标虚拟光源点,则将第一张图片的目标虚拟光源点赋予给第一到第五张图片。然后计算第六张图片的目标虚拟光源点,并将第六张图片的目标虚拟光源点赋予给第六到第十张图片。即,每隔0.2秒计算一次,而不是每一张图片都需要计算。
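上述按第一阈值复用目标虚拟光源点集的逻辑,可以用如下示意代码表达(Python 草图,函数与参数名均为假设):

```python
def get_target_vpls(t_now, t_prev, prev_vpls, compute_fn, threshold=0.2):
    """第一阈值缓存逻辑:与上一次计算时间之差小于阈值时复用上次结果,
    否则调用 compute_fn 重新计算目标虚拟光源点集。

    返回 (目标虚拟光源点集, 本次对应的计算时间)。
    """
    if t_prev is not None and (t_now - t_prev) < threshold:
        return prev_vpls, t_prev          # 时间差小于第一阈值:直接复用
    return compute_fn(), t_now            # 否则重新计算并更新计算时间
```

按文中的例子(帧间隔 0.04 秒、第一阈值 0.2 秒),10 张图片只需在第 1 张和第 6 张上各计算一次。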
通过本实施例,通过设定第一阈值的方法,每隔第一阈值计算一次目标虚拟光源点,从而减少计算目标虚拟光源点的频率,提高了确定目标虚拟光源点的效率。
作为一种可选的实施方案,确定对第一图片进行光照渲染的目标虚拟光源点集还包括:
S1,若获取到第一图片的时间距获取到第J-1张已处理图片的时间差小于第一阈值,将第J-1张已处理图片的目标虚拟光源点集作为第一图片的目标虚拟光源点集,其中,第一图片为图片集中的第J张图片,J为大于1的整数。
通过本实施例,通过设定第一阈值的方法,在第一图片与前一张图片之间的时间间隔小于第一阈值的情况下,直接将前一张图片的目标虚拟光源点赋予给第一图片,从而提高了确定目标虚拟光源点的效率。
作为一种可选的实施方案,根据目标虚拟光源点集对第一图片进行光照渲染包括:
S1,获取目标虚拟光源点集对第一图片中每一个像素点的光照结果,得到第一图片的光照图,其中,光照图中记录有第一图片中每一个像素点的光照值;
S2,将光照图与第一图片的颜色图叠加,得到渲染后的第一图片。
可选地,在获取到目标虚拟光源点集后,使用目标虚拟光源点集确定第一图片中每一个像素点的光照结果,从而获取到第一图片的光照图,并结合光照图与颜色图,确定出渲染后的第一图片。在结合光照图与颜色图时,可以将调整透明度后的光照图叠加到颜色图上,或者直接将光照图叠加到颜色图上。
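上述将光照图(可调整透明度后)叠加到颜色图得到渲染后图片的过程,可以用如下示意代码表达(Python/NumPy 草图;加法叠加并截断到 [0, 1] 是一种可行的叠加方式,文中未限定具体叠加算子):

```python
import numpy as np

def blend(color_map, light_map, alpha=1.0):
    """将光照图按透明度 alpha 叠加到颜色图上,得到渲染后的混合图片。

    color_map / light_map: 同形状数组,取值范围假设为 [0, 1];
    alpha: 光照图的透明度系数,alpha=1.0 时为直接叠加。
    """
    return np.clip(color_map + alpha * light_map, 0.0, 1.0)
```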
通过本实施例,通过将光照图叠加到颜色图上,从而实现了可以直接处理第一图片从而实现对虚拟三维场景中的虚拟对象进行光照渲染的目的,提高了光照渲染的效率。
作为一种可选的实施方案,获取目标虚拟光源点集对第一图片中每一个像素点的光照结果包括:
S1,将第一图片中每一个像素点作为第三像素点,执行以下操作,直到确定出第一图片中每一个像素点的光照结果:
S2,确定目标虚拟光源点集中每一个虚拟光源点对第三像素点的第一光照值;
S3,累加目标虚拟光源点集中每一个虚拟光源点对第三像素点的第一光照值,得到光照结果。
通过本实施例,通过确定目标虚拟光源点集中的每一个目标虚拟光源点与一个像素点的第一光照值,并叠加第一光照值得到光照结果,从而提高了获取第一图片的光照图的效率。
作为一种可选的实施方案,根据目标虚拟光源点集对所述第一图片中虚拟对象进行光照渲染的方式可以是:S1,获取图片集中位于第一图片之前的每一张已处理图片的目标虚拟光源点集,根据图片集中每一张图片的目标虚拟光源点集,确定第一图片的第二虚拟光源点集,其中,第一图片为图片集中最后一张图片;使用位于第一图片前一帧的已处理图片的第一虚拟光源点集中的N个像素点替换第一图片的第二虚拟光源点集中的N个像素点,将替换后的第二虚拟光源点集确定为第一图片的第一虚拟光源点集;使用第一图片的第一虚拟光源点集对第一图片中的虚拟对象进行光照渲染。
可选地,本实施例中在获取到目标虚拟光源点集后,并未使用目标虚拟光源点集对第一图片进行处理,而是根据图片集中的每一个图片的目标虚拟光源点集计算出第一图片的第一虚拟光源点集,并使用计算得到的第一虚拟光源点集对第一图片进行处理。
通过本实施例,通过根据图片集中每一个图片的目标虚拟光源点集确定第一图片的第一虚拟光源点集,以及根据第一虚拟光源点集对第一图片进行光照渲染。避免了渲染过程中出现闪烁的问题。
作为一种可选的实施方案,根据图片集中每一张图片的目标虚拟光源点集确定第一图片的第二虚拟光源点集包括:
S1,获取为图片集中每一张已处理图片设置的权重值;
S2,针对第一图片的目标虚拟光源点集中的每一个像素点,获取已处理图片的目标虚拟光源点集中包括该像素点的已处理图片的权重;
S3,获取权重的和,得到第一图片的目标虚拟光源点中的每一个像素点的权重和;
S4,将第一图片的目标虚拟光源点集中权重和最大的K个像素点作为第一图片的第二虚拟光源点集,其中,K为大于零的整数。
例如,以第一图片之前包含有三张已处理图片为例,处理顺序为已处理图片1、已处理图片2、已处理图片3,权重分别为0.1、0.3、0.6。对于第一图片的目标虚拟光源点集中的一个像素点,遍历已处理图片1-3中每一张图片的目标虚拟光源点集:若在已处理图片1的目标虚拟光源点集中发现了该像素点,则获取已处理图片1的权重0.1;又在已处理图片3的目标虚拟光源点集中发现了该像素点,则获取已处理图片3的权重0.6,计算得到权重和为0.7。通过该方法确定出第一图片的目标虚拟光源点中的每一个像素点的权重和,并进行排序,选择其中权重和最大的几个像素点确定为第二虚拟光源点集。然后根据第一图片的前一张图片的第一虚拟光源点集与第一图片的第二虚拟光源点集确定第一图片的第一虚拟光源点集。
通过本实施例,通过上述方法确定第一虚拟光源点集,从而保证了确定第一虚拟光源点集的准确性。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
根据本申请实施例的另一个方面,还提供了一种用于实施上述光照渲染方法的光照渲染装置。如图9所示,该装置包括:
(1)获取单元902,用于获取虚拟三维场景中目标视角下的第一图片,其中,第一图片中包括目标视角下虚拟三维场景中待进行光照渲染的虚拟对象;
(2)确定单元904,用于确定对第一图片中虚拟对象进行光照渲染的目标虚拟光源点集;
(3)渲染单元906,用于根据目标虚拟光源点集对第一图片中虚拟对象进行光照渲染。
可选地,上述光照渲染方法可以但不限于应用于对三维虚拟场景进行光照渲染的过程中。例如,对游戏的三维虚拟场景进行光照渲染或者对虚拟训练的三维虚拟场景进行光照渲染或者对虚拟购物的三维虚拟场景进行渲染的过程中。
以对游戏的虚拟三维场景进行光照渲染为例进行说明。在游戏的过程中,游戏运行有虚拟三维场景。虚拟三维场景中包括有待进行全局光照渲染的虚拟对象。相关技术中通常需要根据虚拟三维场景中的虚拟光源对虚拟三维场景中的虚拟对象的光照影响进行实时的计算,得到计算结果,并对虚拟对象进行全局光照渲染。然而,上述方法计算量大,对虚拟三维场景的全局光照渲染效率低。本实施例中,获取虚拟三维场景的目标视角下的第一图片,并确定对第一图片中虚拟对象进行光照渲染的目标虚拟光源点集,以及使用目标虚拟光源点集对第一图片中虚拟对象进行光照渲染,从而只对第一图片进行光照渲染,不需要再对虚拟三维场景进行实时的渲染,降低了渲染过程的计算量,提高了对虚拟三维场景进行渲染的效率。
作为一种可选的实施方案,上述确定单元包括:
(1)第一确定模块,用于确定第一图片中每一个子图片的原始虚拟光源点集,其中,第一图片中包括多个子图片;
(2)第一合并模块,用于将每一个子图片的原始虚拟光源点集合进行合并得到目标虚拟光源点集,其中,目标虚拟光源点集中不包含重复的光源点。
通过本实施例,通过将原始图片拆分成多个子图片,然后计算每个子图片的原始虚拟光源点集,并合并为目标虚拟光源点集,从而使目标虚拟光源点集的目标虚拟光源点相对分散,不会集中到原始图片的一个较小的区域内,从而提高了确定的目标虚拟光源点的准确性。
作为一种可选的实施方案,上述第一确定模块包括:
(1)第一确定子模块,用于针对第一图片中每一个子图片,确定子图片中每一个像素点的关联像素点,得到子图片的关联像素点集;
(2)第二确定子模块,用于将上述关联像素点集中出现频率最高的前M个像素点确定为子图片的上述原始虚拟光源点集,其中,M为大于零的整数。
通过本实施例,通过在确定子图片的关联像素点集时,将子图片的像素点的关联像素点中出现频率最高的几个像素点确定为原始虚拟光源点集,从而避免了因需要计算的光源点数量过大而造成的运算量大的问题。而且由于根据上述方法选择出的频率最高的几个像素点的重要性高,因此,也保证了确定目标虚拟光源点时的准确率。
作为一种可选的实施方案,上述第一确定子模块还用于执行以下步骤:
(1)将子图片中每一个像素点分别确定为第一像素点,执行以下操作:分别将上述第一像素点上、下、左、右四个方向的每个方向中,与第一像素点距离最近的且深度值大于上述第一像素点深度值的一个像素点确定为第二像素点,将深度值最小的第二像素点确定为上述第一像素点的关联像素点;
(2)将子图片中每一个像素点的关联像素点合并为子图片的关联像素点集,其中,关联像素点集中包含重复的像素点。
通过本实施例,通过上述方法遍历每一个像素点的关联像素点,从而可以在较短的时间内遍历到符合条件的关联像素点,而不需要遍历所有的像素点,提高了获取每一个像素点的关联像素点的效率。
作为一种可选的实施方案,上述确定单元包括:
(1)第二确定模块,用于若获取到第一图片的时间与获取到第J-1张已处理图片的时间差大于或者等于第一阈值,确定第一图片中每一个子图片的原始虚拟光源点集,其中,第一图片中包括多个子图片,第一图片为图片集中的第J张图片,J为大于1的整数;
(2)第二合并模块,用于将每一个子图片的原始虚拟光源点集进行合并得到第一图片的目标虚拟光源点集,其中,目标虚拟光源点集中不包含重复的光源点。
通过本实施例,通过设定第一阈值的方法,每隔第一阈值计算一次目标虚拟光源点,从而减少计算目标虚拟光源点的频率,提高了确定目标虚拟光源点的效率。
作为一种可选的实施方案,上述确定单元还包括:
(1)第三确定模块,用于若获取到第一图片的时间与获取到第J-1张已处理图片的时间差小于第一阈值,将第J-1张已处理图片的目标虚拟光源点集作为第一图片的目标虚拟光源点集,其中,第一图片为图片集中的第J张图片,J为大于1的整数。
通过本实施例,通过设定第一阈值的方法,在第一图片与前一张图片之间的时间间隔小于第一阈值的情况下,直接将前一张图片的目标虚拟光源点赋予给第一图片,从而提高了确定目标虚拟光源点的效率。
作为一种可选的实施方案,上述渲染单元包括:
(1)第一获取模块,用于获取目标虚拟光源点集对第一图片中每一个像素点的光照结果,得到第一图片的光照图,其中,光照图中记录有第一图片中每一个像素点的光照值;
(2)叠加模块,用于将光照图与第一图片的颜色图叠加,得到渲染后的第一图片。
通过本实施例,通过将光照图叠加到颜色图上,从而实现了可以直接处理第一图片从而实现对虚拟三维场景中的虚拟对象进行光照渲染的目的,提高了光照渲染的效率。
作为一种可选的实施方案,上述获取模块包括:
(1)执行子模块,用于将第一图片中每一个像素点作为第三像素点,执行以下操作,直到确定出第一图片中每一个像素点的光照结果:
(2)确定目标虚拟光源点集中每一个虚拟光源点对第三像素点的第一光照值;
(3)累加目标虚拟光源点集中每一个虚拟光源点对第三像素点的第一光照值,得到光照结果。
通过本实施例,通过确定目标虚拟光源点集中的每一个目标虚拟光源点与一个像素点的第一光照值,并叠加第一光照值得到光照结果,从而提高了获取第一图片的光照图的效率。
作为一种可选的实施方案,上述渲染单元用于获取图片集中位于第一图片之前的每一张已处理图片的目标虚拟光源点集,根据图片集中每一张图片的目标虚拟光源点集,确定第一图片的第二虚拟光源点集,其中,第一图片为图片集中最后一张图片,使用位于第一图片前一帧的已处理图片的第一虚拟光源点集中的N个像素点替换第一图片的第二虚拟光源点集中的N个像素点,将替换后的第二虚拟光源点集确定为第一图片的第一虚拟光源点集;使用第一图片的第一虚拟光源点集对第一图片中虚拟对象进行光照渲染。
通过本实施例,通过根据图片集中每一个图片的目标虚拟光源点集确定第一图片的第一虚拟光源点集,以及根据第一虚拟光源点集对第一图片进行光照渲染。避免了渲染过程中出现闪烁的问题。
作为一种可选的实施方案,上述处理单元还包括:
(1)第二获取模块,用于获取为图片集中每一张已处理图片设置的权重值;
(2)第三获取模块,用于针对第一图片的目标虚拟光源点集中的每一个像素点,获取已处理图片的目标虚拟光源点集中包括像素点的已处理图片的权重;
(3)第四获取模块,用于获取权重的和,得到第一图片中每一个像素点的权重和;
(4)第四确定模块,用于将第一图片中,权重和最大的K个像素点作为第一图片的第二虚拟光源点集,其中,K为大于零的整数。
通过本实施例,通过上述方法确定第一虚拟光源点集,从而保证了确定第一虚拟光源点集的准确性。
根据本申请实施例的又一个方面,还提供了一种用于实施上述光照渲染方法的电子装置,如图10所示,该电子装置包括存储器1002和处理器1004,该存储器1002中存储有计算机程序,该处理器1004被设置为通过计算机程序执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述电子装置可以位于计算机网络的多个网络设备中的至少一个网络设备。
可选地,在本实施例中,上述处理器可以被设置为通过计算机程序执行以下步骤:
S1,获取虚拟三维场景中目标视角下的第一图片,其中,第一图片中包括目标视角下虚拟三维场景中待进行光照渲染的虚拟对象;
S2,确定对第一图片中虚拟对象进行光照渲染的目标虚拟光源点集;
S3,根据目标虚拟光源点集对第一图片中虚拟对象进行光照渲染。
可选地,本领域普通技术人员可以理解,图10所示的结构仅为示意,电子装置也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图10并不对上述电子装置的结构造成限定。例如,电子装置还可包括比图10中所示更多或者更少的组件(如网络接口等),或者具有与图10所示不同的配置。
其中,存储器1002可用于存储软件程序以及模块,如本申请实施例中的光照渲染方法和装置对应的程序指令/模块,处理器1004通过运行存储在存储器1002内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的光照渲染方法。存储器1002可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器1002可进一步包括相对于处理器1004远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。其中,存储器1002具体可以但不限于用于存储第一图片与目标虚拟光源点集等信息。作为一种示例,如图10所示,上述存储器1002中可以但不限于包括上述光照渲染装置中的获取单元902、确定单元904与渲染单元906。此外,还可以包括但不限于上述光照渲染装置中的其他模块单元,本示例中不再赘述。
可选地,上述的传输装置1006用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置1006包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置1006为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
此外,上述电子装置还包括:显示器1008,用于显示光照渲染后的第一图片;和连接总线1010,用于连接上述电子装置中的各个模块部件。
根据本申请的实施例的又一方面,还提供了一种存储介质,该存储介质中存储有计算机程序,其中,该计算机程序被设置为运行时执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,获取虚拟三维场景中目标视角下的第一图片,其中,第一图片中包括目标视角下虚拟三维场景中待进行光照渲染的虚拟对象;
S2,确定对第一图片中虚拟对象进行光照渲染的目标虚拟光源点集;
S3,根据目标虚拟光源点集对第一图片中虚拟对象进行光照渲染。
可选地,在本实施例中,本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁盘或光盘等。上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (16)

  1. 一种光照渲染方法,所述方法应用于终端,包括:
    获取虚拟三维场景中目标视角下的第一图片,其中,所述第一图片中包括所述目标视角下所述虚拟三维场景中待进行光照渲染的虚拟对象;
    确定对所述第一图片中所述虚拟对象进行光照渲染的目标虚拟光源点集;
    根据所述目标虚拟光源点集对所述第一图片中所述虚拟对象进行光照渲染。
  2. 根据权利要求1所述的方法,所述确定对所述第一图片中所述虚拟对象进行光照渲染的目标虚拟光源点集包括:
    确定所述第一图片中每一个子图片的原始虚拟光源点集,其中,所述第一图片中包括多个子图片;
    将所述每一个子图片的所述原始虚拟光源点集进行合并得到所述目标虚拟光源点集,其中,所述目标虚拟光源点集中不包含重复的光源点。
  3. 根据权利要求2所述的方法,所述确定所述第一图片中每一个子图片的原始虚拟光源点集包括:
    针对所述第一图片中每一个子图片,确定所述子图片中每一个像素点的关联像素点,得到所述子图片的关联像素点集;
    将所述关联像素点集中出现频率最高的前M个像素点确定为所述子图片的所述原始虚拟光源点集,其中,所述M为大于零的整数。
  4. 根据权利要求3所述的方法,所述确定所述子图片中每一个像素点的关联像素点,得到所述子图片的关联像素点集包括:
    将所述子图片中每一个像素点分别确定为第一像素点,执行以下操作:分别将所述第一像素点上、下、左、右四个方向的每个方向中,与所述第一像素点距离最近且深度值大于所述像素点深度值的一个像素点确定为第二像素点;将深度值最小的所述第二像素点确定为所述第一像素点的关联像素点;
    将所述子图片中每一个像素点的关联像素点合并为所述子图片的关联像素点集,其中,所述关联像素点集中包含重复的像素点。
  5. 根据权利要求1所述的方法,所述确定对所述第一图片中所述虚拟对象进行光照渲染的目标虚拟光源点集包括:
    若获取到所述第一图片的时间与获取到第J-1张已处理图片的时间差大于或者等于第一阈值,确定所述第一图片中每一个子图片的原始虚拟光源点集,其中,所述第一图片中包括多个子图片,所述第一图片为图片集中的第J张图片,所述J为大于1的整数;
    将所述每一个子图片的所述原始虚拟光源点集进行合并得到所述目标虚拟光源点集,其中,所述目标虚拟光源点集中不包含重复的光源点。
  6. 根据权利要求1或5所述的方法,所述确定对所述第一图片中所述虚拟对象进行光照渲染的目标虚拟光源点集还包括:
    若获取到所述第一图片的时间与获取到第J-1张已处理图片的时间差小于第一阈值,将所述第J-1张已处理图片的目标虚拟光源点集作为所述第一图片的目标虚拟光源点集,其中,所述第一图片为图片集中的第J张图片,所述J为大于1的整数。
  7. 根据权利要求1至4中任意一项所述的方法,所述根据所述目标虚拟光源点集对所述第一图片中所述虚拟对象进行光照渲染包括:
    获取所述目标虚拟光源点集对所述第一图片中每一个像素点的光照结果,得到所述第一图片的光照图,其中,所述光照图中记录有所述第一图片中每一个像素点的光照值;
    将所述光照图与所述第一图片的颜色图叠加,得到渲染后的所述第一图片。
  8. 根据权利要求7所述的方法,所述获取所述目标虚拟光源点集对所述第一图片中每一个像素点的光照结果包括:
    将所述第一图片中每一个像素点作为第三像素点,执行以下操作,直到确定出所述第一图片中每一个像素点的光照结果:
    确定所述目标虚拟光源点集中每一个虚拟光源点对所述第三像素点的第一光照值;
    累加所述目标虚拟光源点集中每一个虚拟光源点对所述第三像素点的第一光照值,得到所述光照结果。
  9. 根据权利要求1所述的方法,所述根据所述目标虚拟光源点集对所述第一图片中所述虚拟对象进行光照渲染包括:
    获取图片集中位于所述第一图片之前的每一张已处理图片的目标虚拟光源点集;
    根据所述图片集中每一张图片的目标虚拟光源点集,确定所述第一图片的第二虚拟光源点集,其中,所述第一图片为所述图片集中最后一张图片;
    使用位于所述第一图片前一帧的已处理图片的第一虚拟光源点集中的N个像素点替换所述第一图片的第二虚拟光源点集中的N个像素点,将替换后的所述第二虚拟光源点集确定为所述第一图片的第一虚拟光源点集;
    使用所述第一图片的所述第一虚拟光源点集对所述第一图片中所述虚拟对象进行光照渲染。
  10. 根据权利要求9所述的方法,所述根据所述图片集中每一张图片的目标虚拟光源点集确定所述第一图片的第二虚拟光源点集包括:
    获取为所述图片集中每一张已处理图片设置的权重值;
    针对所述第一图片的目标虚拟光源点集中的每一个像素点,获取已处理图片的目标虚拟光源点集中包括所述像素点的已处理图片的权重;
    获取所述权重的和,得到所述第一图片的目标虚拟光源点集中每一个像素点的权重和;
    将所述第一图片中,权重和最大的K个像素点作为所述第一图片的第二虚拟光源点集,其中,所述K为大于零的整数。
  11. 一种光照渲染装置,包括:
    获取单元,用于获取虚拟三维场景中目标视角下的第一图片,其中,所述第一图片中包括所述目标视角下所述虚拟三维场景中待进行光照渲染的虚拟对象;
    确定单元,用于确定对所述第一图片中所述虚拟对象进行光照渲染的目标虚拟光源点集;
    渲染单元,用于根据目标虚拟光源点集对第一图片中所述虚拟对象进行光照渲染。
  12. 根据权利要求11所述的装置,所述确定单元包括:
    第一确定模块,用于确定所述第一图片中每一个子图片的原始虚拟光源点集,其中,所述第一图片中包括多个子图片;
    第一合并模块,用于将所述每一个子图片的所述原始虚拟光源点集进行合并得到所述目标虚拟光源点集,其中,所述目标虚拟光源点集中不包含重复的光源点。
  13. 根据权利要求12所述的装置,所述第一确定模块包括:
    第一确定子模块,用于针对所述第一图片中每一个子图片,确定所述子图片中每一个像素点的关联像素点,得到所述子图片的关联像素点集;
    第二确定子模块,用于将所述关联像素点集中出现频率最高的前M个像素点确定为所述子图片的所述原始虚拟光源点集,其中,所述M为大于零的整数。
  14. 一种存储介质,所述存储介质存储有计算机程序,所述计算机程序运行时执行所述权利要求1至10任一项中所述的方法。
  15. 一种电子装置,包括存储器和处理器,所述存储器中存储有计算机程序,所述处理器被设置为通过所述计算机程序执行所述权利要求1至10任一项中所述的方法。
  16. 一种计算机程序产品,当所述计算机程序产品被执行时,用于执行如权利要求1至10任一项所述的方法。
PCT/CN2020/088629 2019-05-17 2020-05-06 光照渲染方法和装置、存储介质及电子装置 WO2020233396A1 (zh)





