CN113648652B - Object rendering method and device, storage medium and electronic equipment - Google Patents

Object rendering method and device, storage medium and electronic equipment

Info

Publication number
CN113648652B
Authority
CN
China
Prior art keywords: light source, illumination, current, target, rendering
Prior art date
Legal status: Active
Application number
CN202110963740.5A
Other languages
Chinese (zh)
Other versions
CN113648652A (en)
Inventor
袁佳平
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110963740.5A priority Critical patent/CN113648652B/en
Publication of CN113648652A publication Critical patent/CN113648652A/en
Application granted granted Critical
Publication of CN113648652B publication Critical patent/CN113648652B/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Abstract

The invention discloses an object rendering method and device, a storage medium, and electronic equipment. The method comprises the following steps: determining a target virtual object to be rendered currently in a virtual scene configured with a surface light source; acquiring the illumination distance between a photosensitive position on the target virtual object and the surface light source; acquiring, in an illumination data set matched with the surface light source, target illumination data matched with the illumination distance, wherein the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene; and rendering the photosensitive position on the target virtual object according to the target illumination data. The method solves the technical problem of high rendering complexity in the object rendering methods provided by the related art.

Description

Object rendering method and device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of computers, and in particular, to an object rendering method and apparatus, a storage medium, and an electronic device.
Background
Today, in many virtual game scenes provided by gaming applications, in order to give players an immersive feeling, the game platform often simulates within the virtual game scene numerous elemental objects from a real scene, for example mountains, rivers, animals, plants, and buildings. In addition, in order to make each object in the virtual game scene look more realistic, the lighting effect of lights in the real scene is also simulated. At present, most of the virtual light sources configured in a virtual game scene by a 3D engine are point light sources, that is, a beam of light rays projected in a cone shape from a point (the light source) in a certain direction; the closer to the central axis of the light source, the more intense the light rays, and conversely, the sparser the light rays and the lower the brightness.
However, when calculating the pixel values of a light-receiving object in the virtual game scene for rendering and display based on a point light source, a spherical distribution function is generally adopted, and the computer is required to solve a spherical equation. The amount of calculation is therefore large, real-time rendering is challenged, and the fluency of the rendered picture may be reduced. That is, the object rendering method provided by the related art has the problem of high rendering complexity.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the invention provides an object rendering method and device, a storage medium and electronic equipment, which at least solve the technical problem of higher rendering complexity of the object rendering method provided by the related technology.
According to an aspect of an embodiment of the present invention, there is provided an object rendering method including: determining a target virtual object to be rendered currently in a virtual scene configured with a surface light source; acquiring the illumination distance between the photosensitive position on the target virtual object and the surface light source; acquiring target illumination data matched with the illumination distance from an illumination data set matched with the surface light source, wherein the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene; rendering the photosensitive position on the target virtual object according to the target illumination data.
According to another aspect of the embodiment of the present invention, there is also provided an object rendering apparatus including: the first determining unit is used for determining a target virtual object to be rendered currently in the virtual scene configured with the area light source; a first obtaining unit, configured to obtain an illumination distance between a photosensitive position on the target virtual object and the surface light source; the second acquisition unit is used for acquiring target illumination data matched with the illumination distance in an illumination data set matched with the surface light source, wherein the illumination data set comprises illumination data corresponding to different space positions in the virtual scene; and the rendering unit is used for rendering the photosensitive positions on the target virtual object according to the target illumination data.
According to a further aspect of embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above-described object rendering method when run.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic apparatus including a memory in which a computer program is stored, and a processor configured to execute the above-described object rendering method by the above-described computer program.
In the embodiment of the invention, a target virtual object to be rendered currently is determined in a virtual scene configured with a surface light source, and the illumination distance between the photosensitive position on the target virtual object and the surface light source is acquired. Target illumination data matched with the illumination distance is acquired from an illumination data set generated in advance for the surface light source, where the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene. The photosensitive position on the target virtual object is then rendered according to the target illumination data. That is, based on the area light source configured in the virtual scene, the illumination data produced by the area light source is calculated in advance for each spatial position in the virtual scene; after the target virtual object to be rendered is determined, the illumination data corresponding to the photosensitive position is pulled directly from this existing illumination data and rendering is performed based on it. Spherical distribution calculation does not need to be carried out for each photosensitive position on the target virtual object, which greatly reduces the amount of calculation, simplifies the object rendering process, and achieves the effect of improving rendering efficiency, thereby solving the problem of the high rendering complexity of the object rendering method provided by the related art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment of an alternative object rendering method according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative object rendering method according to an embodiment of the application;
FIG. 3 is a schematic diagram of an alternative object rendering method according to an embodiment of the application;
FIG. 4 is a schematic diagram of another alternative object rendering method according to an embodiment of the application;
FIG. 5 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the application;
FIG. 6 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the application;
FIG. 7 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the application;
FIG. 8 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the application;
FIG. 9 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the application;
FIG. 10 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the invention;
FIG. 12 is a flow chart of another alternative object rendering method according to an embodiment of the invention;
FIG. 13 is a schematic diagram of an alternative object rendering apparatus according to an embodiment of the present invention;
fig. 14 is a schematic structural view of an alternative electronic device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present application, the following technical terms may be referred to, but are not limited to:
canvas: is part of HTML5, allowing the scripting language to dynamically render bit-images.
Threejs: the three-dimensional image processing system is a 3D engine running in a browser, can be used for creating various three-dimensional scenes in Web, and is an easy-to-use 3D image library formed by packaging and simplifying a WebGL interface, and various objects such as a camera, a light shadow and a material are included.
WebGL: the 3D drawing protocol is a drawing technical standard which allows JavaScript and OpenGL ES 2.0 to be combined together, and by adding one JavaScript binding of OpenGL ES 2.0, webGL can provide hardware 3D accelerated rendering for HTML5 Canvas, so that a Web developer can more smoothly show 3D scenes and models in a browser by means of a device graphic card, and can also create complex navigation and data visualization.
Graphics processor (Graphics Processing Unit, GPU) is a microprocessor that is dedicated to image and graphics related operations on personal computers, workstations, gaming machines, and some mobile devices (e.g., tablet computers, smartphones, etc.), and is also referred to as a display core, a vision processor, and a display chip.
Shader: used for image rendering; it can paint or draw something on the screen. The shader replaces the traditional fixed rendering pipeline and can implement calculations related to 3D graphics. Because it is programmable, a wide variety of image effects can be achieved without being limited by the fixed rendering pipeline of the graphics card.
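To make the relationship between the Canvas, WebGL, and the shader concrete, the following is a minimal sketch (not taken from the patent) of obtaining a WebGL context from an HTML5 Canvas and compiling a trivial fragment shader; the canvas id and the constant output color are illustrative assumptions.

```javascript
// Minimal sketch: get a WebGL context from an HTML5 Canvas and compile a
// trivial fragment shader. The canvas id "scene" is an assumed placeholder.
const canvas = document.getElementById('scene');
const gl = canvas.getContext('webgl');

const fragmentSource = `
  precision mediump float;
  void main() {
    // Paint every covered pixel a constant color.
    gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0);
  }
`;

const shader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(shader, fragmentSource);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(shader));
}
```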
According to an aspect of the embodiment of the present invention, there is provided an object rendering method, optionally, as an optional implementation manner, the object rendering method may be applied, but not limited to, to an object rendering system in a hardware environment as shown in fig. 1, where the object rendering system may include, but is not limited to, a terminal device 102, a network 104, a server 106, and a database 108. The terminal device 102 has a target client (e.g., a game client as shown in fig. 1) for displaying the virtual scene running therein. The terminal device 102 includes a man-machine interaction screen, a processor and a memory. The man-machine interaction screen is used for displaying each virtual object appearing in a virtual game scene (the virtual game scene is a shooting game scene as shown in fig. 1); and is further configured to provide a human-machine interaction interface to receive human-machine interaction operations for controlling virtual objects controlled in the virtual game scene, the virtual objects to complete game tasks set in the virtual game scene. The processor is used for responding to the man-machine interaction operation to generate an interaction instruction and sending the interaction instruction to the server. The memory is used for storing relevant attribute data, such as object attribute data of a virtual object to be rendered and illumination data required for rendering.
In addition, a processing engine is included in the server 106 for performing storage or reading operations on the database 108. Specifically, the processing engine will store the attribute information corresponding to the tagged target object returned by the terminal device 102 in the database 108.
The specific process comprises the following steps: as by step S100, a virtual scene configured with the surface light source 10 is displayed in a client operating within the terminal device 102. Then, as shown in step S102, the determined identifier of the target virtual object 11 to be rendered is sent to the server 106 through the network 104. The server 106 will execute steps S104-S106 to obtain the illumination distance between the photosensitive position on the target virtual object indicated by the identifier and the surface light source, and obtain the target illumination data matched with the illumination distance in the illumination data set matched with the surface light source 10 stored in the database 108. The target illumination data is then returned to the terminal device 102 via the network 104, as in step S108. After receiving the target illumination data, the terminal device 102 will execute step S110 to render the photosensitive position on the target virtual object according to the target illumination data.
It should be noted that, in this embodiment, a target virtual object to be rendered is determined in a virtual scene configured with a surface light source, and the illumination distance between the photosensitive position on the target virtual object and the surface light source is obtained. Target illumination data matched with the illumination distance is acquired from an illumination data set generated in advance for the surface light source, where the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene. The photosensitive position on the target virtual object is then rendered according to the target illumination data. That is, based on the area light source configured in the virtual scene, the illumination data produced by the area light source is calculated in advance for each spatial position in the virtual scene; after the target virtual object to be rendered is determined, the illumination data corresponding to the photosensitive position is pulled directly from this existing illumination data and rendering is performed based on it. Spherical distribution calculation does not need to be carried out for each photosensitive position on the target virtual object, which greatly reduces the amount of calculation, simplifies the object rendering process, and achieves the effect of improving rendering efficiency, thereby solving the problem of the high rendering complexity of the object rendering method provided by the related art.
Alternatively, in the present embodiment, the above-mentioned terminal device may be a terminal device configured with a target client, and may include, but is not limited to, at least one of the following: a mobile phone (e.g., an Android mobile phone, iOS mobile phone, etc.), a notebook computer, a tablet computer, a palm computer, a MID (Mobile Internet Devices, mobile internet device), a PAD, a desktop computer, a smart television, etc. The target client may be a video client, an instant communication client, a browser client, an educational client, or the like that supports displaying a virtual scene configured with a surface light source. The network may include, but is not limited to: a wired network, a wireless network, wherein the wired network comprises: local area networks, metropolitan area networks, and wide area networks, the wireless network comprising: bluetooth, WIFI, and other networks that enable wireless communications. The server may be a single server, a server cluster composed of a plurality of servers, or a cloud server. The above is merely an example, and is not limited in any way in the present embodiment.
Optionally, as an optional embodiment, as shown in fig. 2, the method for rendering an object includes:
s202, determining a target virtual object to be rendered currently in a virtual scene configured with a surface light source;
It should be noted that, because the related art uses point light sources to illuminate the virtual scene, the light intensity at the edge of a point light source decreases, and bright and dark halos appear when the light is irradiated onto the surface of the light-receiving object; because these halos overlap, the brightness of the light-receiving surface is uneven. Therefore, in the embodiment of the application, in a virtual scene configured with a surface light source, the target illumination data corresponding to each photosensitive position on the target virtual object is looked up in an illumination data set calculated in advance for different spatial positions, so that the rendering of the target virtual object is completed quickly and the object rendering efficiency is improved.
Alternatively, in the present embodiment, the above-mentioned surface light source may be, but not limited to, a planar light source having a polygonal shape, such as a strip light, a square light, a hexagonal light source, or the like. In addition, in this embodiment, the target virtual object may be, but is not limited to, an object in the virtual scene that supports light emitted by the surface light source to be reflected and displayed for imaging, such as a character of a virtual character, a virtual animal, a virtual object, a virtual building, a virtual vehicle, and so on. The above is an example, and this is not limited in any way in the present embodiment.
S204, acquiring the illumination distance between the photosensitive position on the target virtual object and the surface light source;
it should be noted that, the target virtual object herein is an object occupying a certain space volume in the virtual scene, and a position (i.e., a photosensitive position) in a partial area on the surface of the target virtual object is irradiated by the surface light source, so as to display an illumination effect.
For example, as shown in fig. 3, it is assumed that surface light sources 302 are disposed in the virtual scene, where the surface light sources 302 are three rectangular surface light sources, each configured with a different color. The surface of the target virtual object 304 on the side adjacent to the surface light sources 302 in the virtual scene shows a bright-surface effect after light sensing due to the irradiation; as shown for the target virtual object 304 in fig. 3, the left surface is a dark surface and the right surface is a bright surface.
Alternatively, in the present embodiment, the illumination distance between the photosensitive position on the target virtual object and the surface light source may be, but is not limited to, the shortest line distance from the photosensitive position to the surface light source.
S206, acquiring target illumination data matched with the illumination distance from an illumination data set matched with the surface light source, wherein the illumination data set comprises illumination data corresponding to different space positions in the virtual scene;
Alternatively, in the present embodiment, the above-described illumination data set may be, but is not limited to, illumination data calculated in advance based on the illumination of each spatial position in the virtual scene by the surface light source, the illumination data being the rendering data required for that spatial position to exhibit a photosensitive effect. The target illumination data matching a photosensitive position on the target virtual object may be, but is not limited to being, looked up in the illumination data set according to the illumination distance of that photosensitive position relative to the surface light source. That is, the illumination data set records the mapping relationship between the spatial positions corresponding to different illumination distances and the corresponding illumination data.
And S208, rendering the photosensitive position on the target virtual object according to the target illumination data.
Alternatively, in the present embodiment, the above-described target virtual object may be, but is not limited to being, rendered based on a shader. Wherein the shader is a function of drawing things onto a screen, which runs in the GPU of the graphics card in the terminal device. The process of rendering a drawing may be accomplished by writing a custom shading program using the programming language used by the shader, i.e., the graphic library shader language (Graphics Library Shader Language, GLSL for short). The shader may be, but is not limited to, a rendering tool that uses a predetermined drawing protocol in a three-dimensional drawing engine to perform drawing, and is configured to calculate relevant information of pixels at each photosensitive position on the photosensitive object according to a shader program, and then render the calculation result on a screen.
For example, assume that the content shown in FIG. 4 is a rendering flow of a shader employing the WebGL drawing protocol. When light irradiates the surface of a light-receiving object (such as a target virtual object), based on the WebGL protocol, a corresponding shader program is run to modify the color value of the surface pixel of the light-receiving object according to the attribute of the light emitted by the light source, so that the illumination effect of the light-receiving object irradiated by the light source is simulated. That is, the vertex shader and the fragment shader in different shading programs draw, calculate the color values of pixels in the GPU, and output the calculation result to the screen.
Alternatively, in this embodiment, the shader may be, but is not limited to, the shader RectAreaLightUniformsLib preset in a three-dimensional engine (such as Threejs) running in a browser, which uses a shader program written according to the Linearly Transformed Cosines (LTC) algorithm. Using this shader program, the illumination effect of a Bidirectional Reflectance Distribution Function (BRDF), that is, the effect after light irradiates the surface of an object, can be approximately simulated. The color attributes of all pixels in the light-receiving area on the outer surface of the object can be changed by the shader, so that the illumination effect of the light source on that area is obtained.
The code of the shader RectAreaLightUniformsLib is long and mainly encapsulates an LTC conversion matrix. When the shader runs, it performs the corresponding linear transformation on the light according to the conversion matrix so that the light is projected onto the surface of the object, and then modifies the color values of all pixels in the projection area according to the attributes of the light. In addition, RectAreaLight internally projects planar light onto the surface of the object; all pixels in the projected area are drawn based on the shader RectAreaLightUniformsLib, the shader program recalculates the color value of each pixel, and the calculation result is rendered on the screen in real time, so that the user can see the effect of the planar light on the object.
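As a hedged illustration of how this is typically wired up in Threejs (not a verbatim excerpt of the patent), the sketch below initializes RectAreaLightUniformsLib and adds a RectAreaLight plus a light-receiving mesh; the sizes, colors, and positions are placeholder values.

```javascript
// A minimal sketch of using RectAreaLight together with RectAreaLightUniformsLib.
import * as THREE from 'three';
import { RectAreaLightUniformsLib } from 'three/examples/jsm/lights/RectAreaLightUniformsLib.js';

RectAreaLightUniformsLib.init(); // uploads the precomputed LTC textures

const scene = new THREE.Scene();

// A 4 x 10 rectangular surface light source with a given color and intensity.
const rectLight = new THREE.RectAreaLight(0xffffff, 5, 4, 10);
rectLight.position.set(5, 5, 0);
rectLight.lookAt(0, 0, 0);
scene.add(rectLight);

// A light-receiving object; RectAreaLight only affects MeshStandardMaterial
// and MeshPhysicalMaterial surfaces.
const box = new THREE.Mesh(
  new THREE.BoxGeometry(2, 2, 2),
  new THREE.MeshStandardMaterial({ color: 0x808080, roughness: 0.3 })
);
scene.add(box);
```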
The BRDF is used to define the relationship between the light arriving from a given incident direction and the radiance leaving in a given exit direction. It describes how incident light is distributed over the outgoing directions after being reflected by a certain surface, i.e., the reflection effect of light as seen by the human eye.
For example, if a laser transmitter emits light downward, vertically onto a tabletop, a bright spot can be seen on the tabletop; observing the bright spot from different directions, its brightness is found to change with the observation direction. Keeping the observation direction of the eyes unchanged and only changing the laser emission direction, the brightness of the bright spot can again be observed to change continuously. This is because the object surface has different reflectivities for different combinations of the angle of incidence and the angle of reflection of light. That is, the BRDF defines how to calculate the brightness of the light reflected toward the viewpoint after the light in a scene irradiates the surface of an object material; it is the generation algorithm of the surface color after the object receives the light. For example, as shown in FIG. 5, n is the normal vector, w_i is the incident ray vector, and w_o is the outgoing ray vector; the BRDF gives the brightness of the outgoing ray w_o under the angle combination of w_i and w_o.
However, the BRDF algorithm adopts a spherical distribution function. When rendering with a polygonal area light source (that is, calculating the color value of each pixel point on the surface of an object when the light irradiates the object), the BRDF needs to be integrated over the polygonal area covered by the light source, as shown in fig. 6; the computer therefore has to solve the spherical equation that arises when the polygonal area light source irradiates the surface of a sphere.
To overcome this difficulty of real-time computation, in this embodiment the BRDF algorithm may be, but is not limited to being, approximated by the LTC algorithm (also referred to as a linear mapping), so as to reduce the computation overhead and obtain the result in real time while ensuring the smoothness of the picture. The LTC algorithm here is a special mapping between two vector spaces that preserves vector addition and scalar multiplication. The linear transformation maps all points in one coordinate system into the coordinate system of another vector space according to a certain transformation rule. For example, as shown in fig. 7, a point X is transformed by a function T into the position T(X) in the value range, and all points of the "definition domain" in the figure can be mapped into the "value range" by the T transformation. In this embodiment, the LTC algorithm may be used to solve the illumination rendering problem for the projection area formed on the surface of the object, so that the spherical equation for calculating the BRDF is simplified into a linear equation and the computational complexity is reduced.
In this embodiment, illumination data formed by irradiating the surface light source onto each spatial position is calculated in advance based on the LTC algorithm, and an illumination data set is obtained. Therefore, when the light receiving object (namely the target virtual object) is hit by the light of the surface light source, the corresponding illumination data can be directly pulled from the illumination data set based on the corresponding illumination distance to complete photosensitive rendering, so that the target virtual object can rapidly and efficiently display the illumination rendering effect illuminated by the surface light source.
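The embodiment does not fix a concrete data structure for the illumination data set; the sketch below shows one assumed way to organize it as a lookup table keyed by quantized illumination distance. The names buildIlluminationDataSet and computeIllumination, and the quantization step, are hypothetical.

```javascript
// A hedged sketch (structure assumed, not specified by the embodiment) of a
// precomputed illumination data set keyed by quantized illumination distance.
const STEP = 0.1; // distance quantization step, an assumed value

function buildIlluminationDataSet(maxDistance, computeIllumination) {
  // computeIllumination(distance) stands in for the offline LTC-based
  // calculation and is assumed to return { color: [r, g, b], alpha }.
  const dataSet = new Map();
  for (let d = 0; d <= maxDistance; d += STEP) {
    const key = Math.round(d / STEP);
    dataSet.set(key, computeIllumination(d));
  }
  return dataSet;
}

function lookupIllumination(dataSet, distance) {
  // At render time the data is pulled directly by distance, with no
  // spherical integration per photosensitive position.
  return dataSet.get(Math.round(distance / STEP));
}
```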
According to the embodiment of the application, the target virtual object to be rendered currently is determined in the virtual scene configured with the area light source, and the illumination distance between the photosensitive position on the target virtual object and the area light source is obtained. Target illumination data matched with the illumination distance is acquired from an illumination data set generated in advance for the surface light source, where the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene. The photosensitive position on the target virtual object is then rendered according to the target illumination data. That is, based on the area light source configured in the virtual scene, the illumination data produced by the area light source is calculated in advance for each spatial position in the virtual scene; after the target virtual object to be rendered is determined, the illumination data corresponding to the photosensitive position is pulled directly from this existing illumination data and rendering is performed based on it. Spherical distribution calculation does not need to be carried out for each photosensitive position on the target virtual object, which greatly reduces the amount of calculation, simplifies the object rendering process, and achieves the effect of improving rendering efficiency, thereby solving the problem of the high rendering complexity of the object rendering method provided by the related art.
As an alternative, in the virtual scene configured with the area light source, determining the target virtual object to be rendered currently includes:
s1, when light rays emitted by a surface light source are projected to a virtual scene, determining a virtual object hit by the light rays as a target virtual object, and determining a surface area of the projection of the light rays on the target virtual object as a photosensitive area, wherein the photosensitive area comprises a photosensitive position.
Specifically described in connection with the example shown in fig. 8: assuming that a surface light source shown on the right side in fig. 8 is arranged in the virtual scene, ray detection is performed based on light rays emitted from the surface light source, and illumination data of the virtual scene, which are irradiated to respective spatial positions in a space provided by the virtual scene, are calculated based on detection results. For example, as shown in fig. 8, the light positions at different illumination distances form different illumination planes (e.g., the diagonally filled areas in the figure), and the corresponding illumination data is calculated at each position on the illumination planes.
When the light hits the target virtual object as the light receiving object, determining the hit surface area, such as the white area of the target virtual object in fig. 8, as the light receiving area, wherein the position in the light receiving area is the light receiving position, then searching the illumination data required for rendering the light receiving position from the pre-calculated illumination data according to the illumination distance, and adjusting and rendering the color value on the pixel point of the light receiving position according to the searched illumination data so as to present the illumination effect of the light receiving position after being illuminated by the surface light source.
According to the embodiment of the application, when the light emitted by the surface light source is projected to the virtual scene, the virtual object hit by the light is determined as the target virtual object, and the surface area of the projection of the light on the target virtual object is determined as the photosensitive area, so that the illumination data corresponding to the photosensitive position in the photosensitive area is conveniently acquired for rendering, and the illumination effect of the target virtual object after being illuminated by the surface light source is accurately presented.
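In a Threejs implementation, this kind of ray detection can be sketched with the Raycaster utility; the following is a hedged illustration rather than the patent's prescribed implementation, and the sampled ray origin and direction are assumptions.

```javascript
// A hedged sketch of the ray detection step using the Threejs Raycaster.
// One ray is cast from a point on the surface light source toward the scene.
import * as THREE from 'three';

function detectPhotosensitiveHit(originOnLight, direction, candidateObjects) {
  const raycaster = new THREE.Raycaster(originOnLight, direction.clone().normalize());
  const hits = raycaster.intersectObjects(candidateObjects, true);
  if (hits.length === 0) return null;
  // The first hit object is treated as the target virtual object, and the
  // hit point lies inside its photosensitive (light-receiving) area.
  return {
    object: hits[0].object,
    photosensitivePoint: hits[0].point,
    distance: hits[0].distance,
  };
}
```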
As an alternative, obtaining the illumination distance between the photosensitive position on the target virtual object and the surface light source includes:
s1, traversing each photosensitive position in a photosensitive area to serve as a current photosensitive position respectively, and executing the following operations:
s11, calculating the distance between the current photosensitive position and the target point position in the area light source, wherein the target point position is the point position pointed by the shortest connecting line between the current photosensitive position and the area light source;
and S12, determining the distance as the illumination distance.
Specifically, the following description is made with reference to the example shown in fig. 9: each photosensitive position in the photosensitive region (white region of the target virtual object as shown in the figure) is traversed, and the illumination distance is calculated by taking it as the current photosensitive position, respectively.
Taking a photosensitive position a as an example, determining a target point position a' corresponding to the photosensitive position a in a surface light source, calculating a distance L of a shortest connecting line between the two, and then determining the distance L as an illumination distance between the two, so as to search illumination data needed for rendering the photosensitive position in an illumination data set based on the illumination distance.
According to the embodiment provided by the application, the distance between the photosensitive position on the target virtual object and the target point position in the surface light source is calculated and used as the illumination distance for searching the corresponding illumination data from the illumination data set. Therefore, the data required by rendering the light-receiving object can be obtained rapidly and accurately, and the rendering efficiency of the light-receiving object can be improved.
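One way to compute this shortest-connection distance for a rectangular surface light source is sketched below; the rectangle representation (center, two orthonormal edge directions, and half-sizes) is an assumption made for illustration and is not mandated by the embodiment.

```javascript
// A hedged sketch: distance from a photosensitive position p to the nearest
// point a' on a rectangular surface light source described by its center,
// orthonormal edge directions (u, v) and half-sizes (halfW, halfH).
import * as THREE from 'three';

function illuminationDistance(p, rect) {
  const d = new THREE.Vector3().subVectors(p, rect.center);
  // Project onto the rectangle's plane axes and clamp to its extent.
  const s = THREE.MathUtils.clamp(d.dot(rect.u), -rect.halfW, rect.halfW);
  const t = THREE.MathUtils.clamp(d.dot(rect.v), -rect.halfH, rect.halfH);
  const targetPoint = rect.center.clone()
    .addScaledVector(rect.u, s)
    .addScaledVector(rect.v, t); // the target point position a'
  return p.distanceTo(targetPoint); // the illumination distance L
}
```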
As an alternative, rendering the photosensitive position on the target virtual object according to the target illumination data includes:
s1, extracting an image rendering parameter value corresponding to a target space position from target illumination data, wherein the image rendering parameter value comprises the following components: color value and transparency;
s2, adjusting the rendering parameter value of each pixel on the photosensitive position corresponding to the target space position according to the image rendering parameter value to obtain an adjusted rendering parameter value;
And S3, performing picture rendering according to the adjusted rendering parameter value.
For example, rendering is performed using the shader RectAreaLightUniformsLib preset in a three-dimensional engine (e.g., Threejs) in the browser. The linear conversion matrix M encapsulated in the shader performs the corresponding linear conversion on the light emitted by the surface light source so that the light is projected onto the surface of the object, and then the color value and transparency of each pixel in the projection area (namely, the photosensitive area on the target virtual object) are modified in combination with the attribute information of the surface light source (such as light source color, light source shape and area, and light source intensity), using the data textures DataTexture(LTC_MAT_1, LTC_MAT_2): color values (Red Green Blue, RGB values) and transparency of the respective pixels are recorded in LTC_MAT_1, and normal vector and position coordinate information of the respective pixels is recorded in LTC_MAT_2.
Each pixel point on the target virtual object is configured with original rendering parameter values that apply when it is not irradiated by the surface light source. In this embodiment, when it is detected that a surface area of the target virtual object is irradiated by the configured surface light source, the image rendering parameter values of the pixel points at the photosensitive positions in the photosensitive area formed by the projection are determined according to the calculation process described above, so as to adjust the original rendering parameter values and obtain the adjusted rendering parameter values.
According to the embodiment provided by the application, after the image rendering parameter value corresponding to the current target space position is extracted from the target illumination data, the rendering parameter value of each pixel on the photosensitive position corresponding to the target space position is adjusted according to the image rendering parameter value, so that the adjusted rendering parameter value is obtained; and performing picture rendering according to the adjusted rendering parameter value. And complicated rendering calculation is not needed to be carried out on the photosensitive position, so that the effect of improving the rendering efficiency is achieved.
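The embodiment does not spell out the exact blending rule used to adjust the original rendering parameter values; the following sketch assumes a simple alpha blend of the looked-up illumination color over a pixel's original color, with adjustPixel, original, and illumination being hypothetical names.

```javascript
// A hedged sketch of adjusting a pixel's original rendering parameter values
// with the image rendering parameter values (color value, transparency)
// pulled from the target illumination data. The alpha-blend rule is assumed.
function adjustPixel(original, illumination) {
  const a = illumination.alpha;
  return {
    r: original.r * (1 - a) + illumination.r * a,
    g: original.g * (1 - a) + illumination.g * a,
    b: original.b * (1 - a) + illumination.b * a,
    alpha: original.alpha, // surface opacity left unchanged in this sketch
  };
}

// Usage: every pixel in the photosensitive area receives the adjusted values.
// photosensitivePixels.forEach(px => Object.assign(px, adjustPixel(px, targetIllumination)));
```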
As an alternative, before determining the target virtual object to be rendered in the virtual scene configured with the area light source, the method further includes:
s1, constructing an illumination equation corresponding to each space position based on illumination results of each point light source on the surface light source irradiating each space position in the virtual scene;
s2, performing linear transformation solving calculation on the illumination equation to obtain illumination data corresponding to the space position;
and S3, constructing an illumination data set based on illumination data corresponding to each spatial position.
Optionally, in this embodiment, step S1, based on the illumination result of each point light source on the surface light source irradiating each spatial position in the virtual scene, constructing the illumination equation corresponding to each spatial position includes:
S11, sequentially taking each space position as a current coloring point position, and determining a current illumination equation corresponding to the current coloring point position;
s12, traversing all point light sources in the surface light source, and taking each point light source as a current point light source in sequence;
s12-1, determining a current light vector of the current point light source irradiated to the current color point position and a current normal vector corresponding to the current color point position under the condition that the current point light source is not the last point light source in the surface light source;
s12-2, calculating a current included angle between the current light vector and the current normal vector;
s12-3, acquiring a linear conversion matrix packaged in the shader, wherein element parameter values in the linear conversion matrix are parameter values with error values within a target interval, which are obtained through multiple times of training;
s12-4, constructing a current object illumination equation of the current point light source illuminated to the current colored point position based on the current light vector, the linear transformation matrix and the cosine value of the current included angle;
s12-5, performing integral calculation on the object illumination equation corresponding to each point light source so as to construct a current illumination equation.
The following examples are specifically described:
the calculation formula of the illumination of the gloss reflection of the surface light source is assumed as follows:
Wherein S is the surface light source region, s is a point on the surface light source region, p is the coloring point, n_p is the normal at the coloring point, w_i is the vector from s to p, and θ_p is the angle between n_p and w_i.
The calculation formula of diffuse reflection illumination of the surface light source is as follows:
wherein θ_s is the angle between the normal n_s of the point s on the surface light source and -w_i. Equation (1) is similar to equation (2), but the glossy reflection additionally involves the complex BRDF and the textured color L on the surface light source.
The simplified processing of the complex illumination equation described above can be accomplished here by a linear transformation matrix, since cos(θ_s) and f(p, w_i, w_o) are both spherical distribution functions and can therefore be related according to the following formula:
f(p, w_i, w_o) ≈ M * cos(θ_s) (3)
wherein M is a linear transformation matrix; that is, for any f(p, w_i, w_o), an M transformation matrix must be found that converts it into cos(θ_s). The linear transformation here multiplies the incident vector by the matrix M, i.e.:
w_i = M * w_o, w_o = M⁻¹ * w_i, where the Jacobian of this transformation accounts for the change of area after the linear transformation and performs the normalization operation. This term is also pre-computed in advance together with the M matrix and stored in a texture for easy sampling.
The M matrix here may be, but is not limited to, obtained by searching candidate matrices for one whose error is sufficiently small and that satisfies the condition. For example, using an idea similar to gradient descent, a matrix M is randomly initialized, an optimization gradient is then obtained through calculation (i.e., M is adjusted and corrected), and the process is repeated until the error falls within the specified numerical range.
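As a rough illustration of the fitting loop described above, the following sketch performs a numerical gradient-descent search over the matrix parameters; the error function, step size, and stopping threshold are assumptions, and in practice such matrices are fitted offline and stored in textures for sampling.

```javascript
// A hedged, schematic sketch of fitting the linear transformation matrix M so
// that M * cos(theta_s) approximates the BRDF lobe f(p, w_i, w_o).
// computeError(M) is a hypothetical callback returning the fitting error.
function fitLtcMatrix(computeError, initialM, { step = 1e-3, tolerance = 1e-4, maxIters = 10000 } = {}) {
  let M = initialM.slice();          // flat array of matrix parameters
  let error = computeError(M);
  for (let iter = 0; iter < maxIters && error > tolerance; iter++) {
    // Numerical gradient of the error with respect to each matrix parameter.
    const grad = M.map((m, i) => {
      const probe = M.slice();
      probe[i] = m + 1e-5;
      return (computeError(probe) - error) / 1e-5;
    });
    M = M.map((m, i) => m - step * grad[i]); // descend along the gradient
    error = computeError(M);
  }
  return { M, error }; // accepted once the error falls within the target interval
}
```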
Based on the illumination equation constructed above, the above equation (4) is taken into equation (1) to obtain:
the above formula (5) is used as an illumination equation constructed for the surface light source in the present embodiment, and then the equation is linearly solved and calculated to obtain rendering parameter values such as color values and transparency required for rendering.
According to the embodiment of the application, the illumination data required by rendering is calculated by the illumination equation constructed for the surface light source through the linear transformation algorithm, and spherical calculation is not needed, so that the rendering operation amount is greatly reduced, and the purpose of improving the rendering efficiency is achieved.
As an alternative, before determining the target virtual object to be rendered in the virtual scene configured with the area light source, the method further includes:
s1, setting luminous attribute information of a surface light source in a virtual scene, wherein the luminous attribute information comprises at least one of the following: the color of the area light source, the illumination intensity of the area light source and the light emitting area of the area light source.
It should be noted that, the surface light source in this embodiment may, but is not limited to, configure light emission attribute information thereof in advance, and further calculate illumination data of each spatial position within the virtual scene in combination with the information.
For example, take the planar light source RectAreaLight packaged by the Threejs engine: it is a planar light source implemented using the RectAreaLightUniformsLib shader and can emit light uniformly from a rectangular plane. Using RectAreaLight, the color, intensity, and light-emitting area of the light source can be set, and the display effect of the surface light source can be as shown in figs. 10 to 11.
The light source can be wrapped by a frame so that it looks more like a planar light source, and the light-emitting surface can be defined, for example so that light is emitted only on one side and not on the other.
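In Threejs, the frame described above corresponds to the RectAreaLightHelper add-on; a minimal sketch follows, reusing the rectLight object from the earlier sketch.

```javascript
// A minimal sketch: wrapping the RectAreaLight with a visible frame using the
// RectAreaLightHelper from the Threejs examples, so the plane of the light
// (its single emitting side) is visible in the scene.
import { RectAreaLightHelper } from 'three/examples/jsm/helpers/RectAreaLightHelper.js';

const helper = new RectAreaLightHelper(rectLight); // rectLight from the earlier sketch
rectLight.add(helper); // the helper is added as a child of the light
```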
The rendering process in the embodiment of the present application is fully described below with reference to the flowchart shown in fig. 12:
as in step S1202, the color, intensity, and light emitting area of the surface light source are set according to the project requirements.
In step S1204, a shader program is run that encapsulates the LTC algorithm.
Wherein the following steps are implemented in the shader:
and S1204-1, projecting light emitted by the surface light source to the object surface of the light receiving object to form a projection area.
S1204-2, each pixel within the projection area is extracted.
S1204-3, the color value of each pixel is modified to obtain the adjusted new color.
And S1204-4, the adjusted new color is rendered on the screen in real time.
The flow shown in fig. 12 is an example, and the steps in the flow are not limited in this embodiment.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
According to another aspect of the embodiment of the present invention, there is also provided an object rendering apparatus for implementing the above object rendering method. As shown in fig. 13, the apparatus includes:
a first determining unit 1302, configured to determine, in a virtual scene configured with a surface light source, a target virtual object to be rendered currently;
A first obtaining unit 1304 for obtaining an illumination distance between a photosensitive position on the target virtual object and the surface light source;
a second obtaining unit 1306, configured to obtain, in an illumination data set that matches a surface light source, target illumination data that matches an illumination distance, where the illumination data set includes illumination data corresponding to each of different spatial positions in the virtual scene;
a rendering unit 1308 for rendering the photosensitive position on the target virtual object according to the target illumination data.
In this embodiment, for an embodiment implemented by each module unit, reference may be made to the above method embodiment, which is not described herein again.
As an alternative, the first determining unit 1302 includes:
and the first determining module is used for determining a virtual object hit by the light as a target virtual object when the light emitted by the surface light source is projected to the virtual scene, and determining a surface area of the projection of the light on the target virtual object as a photosensitive area, wherein the photosensitive area comprises a photosensitive position.
In this embodiment, for an embodiment implemented by each module unit, reference may be made to the above method embodiment, which is not described herein again.
As an alternative, the first acquisition unit 1304 includes:
The first processing module is used for traversing each photosensitive position in the photosensitive area, respectively serving as the current photosensitive position, and executing the following operations:
s1, calculating the distance between the current photosensitive position and the target point position in the area light source, wherein the target point position is the point position pointed by the shortest connecting line between the current photosensitive position and the area light source;
s2, determining the distance as the illumination distance.
In this embodiment, for an embodiment implemented by each module unit, reference may be made to the above method embodiment, which is not described herein again.
As an alternative, the rendering unit 1308 includes:
the extraction module is used for extracting an image rendering parameter value corresponding to the current target space position from the target illumination data, wherein the image rendering parameter value comprises: color value and transparency;
the adjusting module is used for adjusting the rendering parameter value of each pixel on the photosensitive position corresponding to the target space position according to the image rendering parameter value to obtain an adjusted rendering parameter value;
and the rendering module is used for performing picture rendering according to the adjusted rendering parameter value.
In this embodiment, for an embodiment implemented by each module unit, reference may be made to the above method embodiment, which is not described herein again.
As an alternative, the method further comprises:
the first construction unit is used for constructing an illumination equation corresponding to each space position based on illumination results of each point light source on the surface light source when each space position in the virtual scene is illuminated before the current target virtual object to be rendered is determined in the virtual scene configured with the surface light source;
the computing unit is used for carrying out linear transformation solving computation on the illumination equation so as to obtain illumination data corresponding to the space position;
the second construction unit is used for constructing an illumination data set based on illumination data corresponding to each spatial position.
In this embodiment, for an embodiment implemented by each module unit, reference may be made to the above method embodiment, which is not described herein again.
As an alternative, the first building unit includes:
the second determining module is used for sequentially taking each space position as a current coloring point position and determining a current illumination equation corresponding to the current coloring point position;
the second processing module is used for traversing all the point light sources in the surface light source, and taking each point light source as a current point light source in sequence;
under the condition that the current point light source is not the last point light source in the surface light source, determining a current light vector of the current point light source irradiated to the current colored point position and a current normal vector corresponding to the current colored point position;
Calculating a current included angle between the current ray vector and the current normal vector;
acquiring a linear conversion matrix packaged in the shader, wherein element parameter values in the linear conversion matrix are parameter values, wherein error values obtained through multiple times of training are located in a target interval;
based on the current ray vector, the linear transformation matrix and the cosine value of the current included angle, constructing a current object illumination equation of the current point light source illuminated to the current colored point position;
and carrying out integral calculation on the object illumination equation corresponding to each point light source so as to construct a current illumination equation.
In this embodiment, for an embodiment implemented by each module unit, reference may be made to the above method embodiment, which is not described herein again.
As an alternative, the method further comprises:
the setting unit is used for setting the luminous attribute information of the area light source in the virtual scene before the current target virtual object to be rendered is determined in the virtual scene configured with the area light source, wherein the luminous attribute information comprises at least one of the following: the color of the area light source, the illumination intensity of the area light source and the light emitting area of the area light source.
In this embodiment, for an embodiment implemented by each module unit, reference may be made to the above method embodiment, which is not described herein again.
According to still another aspect of the embodiment of the present invention, there is also provided an electronic device for implementing the above object rendering method, where the electronic device may be a terminal device or a server as shown in fig. 1. The present embodiment is described taking the electronic device as a terminal device as an example. As shown in fig. 14, the electronic device comprises a memory 1402 and a processor 1404, the memory 1402 having stored therein a computer program, the processor 1404 being arranged to perform the steps of any of the method embodiments described above by means of the computer program.
Alternatively, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, determining a current target virtual object to be rendered in a virtual scene configured with a surface light source;
S2, acquiring the illumination distance between the photosensitive position on the target virtual object and the surface light source;
S3, acquiring target illumination data matched with the illumination distance in an illumination data set matched with the surface light source, wherein the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene;
and S4, rendering the photosensitive position on the target virtual object according to the target illumination data.
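Assuming a target virtual object has already been selected in step S1 (for example by a ray hit from the surface light source), steps S2-S4 can be pictured with the hedged sketch below. The rectangular light shape, the nearest-key lookup into the precomputed data set and the multiplicative color/transparency adjustment are illustrative choices, not the claimed rendering pipeline, and all names are assumptions.

```python
# Illustrative runtime flow for steps S2-S4 (assumed names, light shape and lookup strategy).
import numpy as np

def closest_point_on_light(photo_position, light_center, light_u, light_v, half_w, half_h):
    """Closest point on a rectangular surface light (center, two unit axes, half extents)."""
    offset = np.asarray(photo_position, dtype=float) - np.asarray(light_center, dtype=float)
    s = np.clip(np.dot(offset, light_u), -half_w, half_w)
    t = np.clip(np.dot(offset, light_v), -half_h, half_h)
    return np.asarray(light_center, dtype=float) + s * np.asarray(light_u) + t * np.asarray(light_v)

def render_photosensitive_position(photo_position, light_center, light_u, light_v,
                                   half_w, half_h, data_set,
                                   base_color=(1.0, 1.0, 1.0), base_alpha=1.0):
    # S2: illumination distance taken along the shortest connecting line from the
    # photosensitive position to the surface light.
    target_point = closest_point_on_light(photo_position, light_center, light_u, light_v,
                                          half_w, half_h)
    distance = float(np.linalg.norm(np.asarray(photo_position, dtype=float) - target_point))
    # S3: fetch the precomputed target illumination data whose key best matches the distance.
    nearest_key = min(data_set, key=lambda d: abs(d - distance))
    illumination = data_set[nearest_key]
    # S4: adjust the rendering parameter values (color value and transparency) accordingly.
    color = tuple(min(c * illumination, 1.0) for c in base_color)
    alpha = min(base_alpha * illumination, 1.0)
    return color, alpha
```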
Alternatively, it will be understood by those skilled in the art that the structure shown in fig. 14 is only schematic, and the electronic device may be a smart phone (such as an Android mobile phone, an iOS mobile phone, etc.), a tablet computer, a palm computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc. The structure shown in fig. 14 does not limit the structure of the above electronic device. For example, the electronic device may also include more or fewer components (such as a network interface) than shown in fig. 14, or have a configuration different from that shown in fig. 14.
The memory 1402 may be used to store software programs and modules, such as program instructions/modules corresponding to the object rendering method and apparatus in the embodiments of the present invention, and the processor 1404 executes the software programs and modules stored in the memory 1402 to perform various functional applications and data processing, i.e., to implement the above-mentioned object rendering method. Memory 1402 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 1402 may further include memory located remotely from processor 1404, which may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1402 may specifically be used to store, but is not limited to, information such as the illumination data for different spatial positions. As an example, as shown in fig. 14, the memory 1402 may include, but is not limited to, the first determining unit 1302, the first acquiring unit 1304, the second acquiring unit 1306, and the rendering unit 1308 in the object rendering apparatus. In addition, the memory may further include, but is not limited to, other module units in the object rendering apparatus, which are not described in detail in this example.
Optionally, the transmission device 1406 is used to receive or transmit data via a network. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission device 1406 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices and routers via a network cable to communicate with the internet or a local area network. In one example, the transmission device 1406 is a Radio Frequency (RF) module that is used to communicate wirelessly with the internet.
In addition, the electronic device further includes: a display 1408 for displaying the virtual scene and presenting an illumination effect of the target virtual object illuminated by the surface light source; and a connection bus 1410 for connecting the respective module parts in the above-described electronic device.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by a plurality of nodes connected through network communication. The nodes may form a peer-to-peer (P2P) network, and any type of computing device, such as a server or a terminal, may become a node in the blockchain system by joining the peer-to-peer network.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the above-described object rendering method. The computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for executing the steps of:
S1, determining a current target virtual object to be rendered in a virtual scene configured with a surface light source;
S2, acquiring the illumination distance between the photosensitive position on the target virtual object and the surface light source;
S3, acquiring target illumination data matched with the illumination distance in an illumination data set matched with the surface light source, wherein the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene;
and S4, rendering the photosensitive position on the target virtual object according to the target illumination data.
Alternatively, in this embodiment, it will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be performed by a program instructing relevant hardware of a terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and the like.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above-described computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that several improvements and modifications may be made by those skilled in the art without departing from the principles of the present invention, and these improvements and modifications shall also fall within the protection scope of the present invention.

Claims (9)

1. An object rendering method, comprising:
constructing an illumination equation corresponding to each shading point based on a light vector from each point light source on a surface light source to each shading point in a virtual scene configured with the surface light source, an included angle between each light vector and a normal vector corresponding to each shading point, and a linear transformation matrix, comprising: sequentially taking each spatial position in the virtual scene as a current shading point position, and determining a current illumination equation corresponding to the current shading point position;
performing linear transformation solving calculation on the illumination equation to obtain illumination data corresponding to the spatial position;
constructing an illumination data set matched with the surface light source based on the illumination data corresponding to each spatial position, wherein the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene;
determining a target virtual object to be rendered currently in the virtual scene;
acquiring the illumination distance between the photosensitive position on the target virtual object and the surface light source;
acquiring target illumination data matched with the illumination distance in the illumination data set;
and rendering the photosensitive position on the target virtual object according to the target illumination data.
2. The method of claim 1, wherein determining, in the virtual scene, a target virtual object to be currently rendered comprises:
and when the light rays emitted by the surface light source are projected into the virtual scene, determining a virtual object hit by the light rays as the target virtual object, and determining a surface area on the target virtual object onto which the light rays are projected as a photosensitive area, wherein the photosensitive area comprises the photosensitive position.
3. The method of claim 2, wherein obtaining the illumination distance between the photosensitive position on the target virtual object and the surface light source comprises:
traversing each photosensitive position in the photosensitive area to serve as a current photosensitive position respectively, and executing the following operations:
calculating the distance between the current photosensitive position and a target point position in the surface light source, wherein the target point position is the point position to which the shortest connecting line from the current photosensitive position to the surface light source points;
and determining the distance as the illumination distance.
4. The method of claim 1, wherein rendering the photosensitive position on the target virtual object in accordance with the target illumination data comprises:
extracting image rendering parameter values corresponding to the current target spatial position from the target illumination data, wherein the image rendering parameter values comprise: a color value and a transparency;
adjusting, according to the image rendering parameter values, the rendering parameter values of all pixels on the photosensitive positions corresponding to the target spatial positions to obtain adjusted rendering parameter values;
and performing picture rendering according to the adjusted rendering parameter value.
5. The method of claim 1, wherein constructing the illumination equation corresponding to each shading point based on the light vector from each point light source on the surface light source to each shading point in the virtual scene configured with the surface light source, the included angle between each light vector and the normal vector corresponding to each shading point, and the linear transformation matrix comprises:
traversing all the point light sources in the surface light source, and taking each point light source in sequence as a current point light source;
determining a current light vector from the current point light source to the current shading point position and a current normal vector corresponding to the current shading point position under the condition that the current point light source is not the last point light source in the surface light source;
calculating a current included angle between the current light vector and the current normal vector;
acquiring the linear transformation matrix encapsulated in the shader, wherein the element parameter values in the linear transformation matrix are parameter values whose error values, obtained through multiple rounds of training, fall within a target interval;
constructing a current object illumination equation for the current point light source illuminating the current shading point position based on the current light vector, the linear transformation matrix and the cosine value of the current included angle;
and performing integral calculation on the object illumination equations corresponding to the respective point light sources to construct the current illumination equation.
6. The method of any one of claims 1 to 5, further comprising, prior to determining a target virtual object to be currently rendered in the virtual scene:
setting luminous attribute information of the surface light source in the virtual scene, wherein the luminous attribute information comprises at least one of the following: the color of the surface light source, the illumination intensity of the surface light source and the light-emitting area of the surface light source.
7. An object rendering apparatus, comprising:
the device is used for constructing an illumination equation corresponding to each shading point based on a light vector from each point light source on a surface light source to each shading point in a virtual scene configured with the surface light source, an included angle between each light vector and a normal vector corresponding to each shading point, and a linear transformation matrix, including: sequentially taking each spatial position in the virtual scene as a current shading point position, and determining a current illumination equation corresponding to the current shading point position;
the device is further used for performing linear transformation solving calculation on the illumination equation to obtain illumination data corresponding to the spatial position;
the device is further used for constructing an illumination data set matched with the surface light source based on the illumination data corresponding to each spatial position, wherein the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene;
the first determining unit is used for determining a target virtual object to be rendered currently in the virtual scene;
the first acquisition unit is used for acquiring the illumination distance between the photosensitive position on the target virtual object and the surface light source;
the second acquisition unit is used for acquiring target illumination data matched with the illumination distance in the illumination data set;
and the rendering unit is used for rendering the photosensitive position on the target virtual object according to the target illumination data.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when run, performs the method of any one of claims 1 to 6.
9. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of the claims 1 to 6 by means of the computer program.
CN202110963740.5A 2021-08-20 2021-08-20 Object rendering method and device, storage medium and electronic equipment Active CN113648652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110963740.5A CN113648652B (en) 2021-08-20 2021-08-20 Object rendering method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110963740.5A CN113648652B (en) 2021-08-20 2021-08-20 Object rendering method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113648652A CN113648652A (en) 2021-11-16
CN113648652B (en) 2023-11-14

Family

ID=78491984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110963740.5A Active CN113648652B (en) 2021-08-20 2021-08-20 Object rendering method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113648652B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022607B (en) * 2021-11-19 2023-05-26 腾讯科技(深圳)有限公司 Data processing method, device and readable storage medium
CN115393499B (en) * 2022-08-11 2023-07-25 广州极点三维信息科技有限公司 3D real-time rendering method, system and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2992590A1 (en) * 2015-07-17 2017-01-26 Abl Ip Holding Llc Systems and methods to provide configuration data to a software configurable lighting device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102346918A (en) * 2011-09-30 2012-02-08 长春理工大学 Method for drawing three-dimensional animation scene only containing object change
CN102819860A (en) * 2012-08-16 2012-12-12 北京航空航天大学 Real-time global illumination method for sub-surface scattering object on the basis of radiosity
CN103606182A (en) * 2013-11-19 2014-02-26 华为技术有限公司 Method and device for image rendering
CN106056658A (en) * 2016-05-23 2016-10-26 珠海金山网络游戏科技有限公司 Virtual object rendering method and virtual object rendering device
CN107452048A (en) * 2016-05-30 2017-12-08 网易(杭州)网络有限公司 The computational methods and device of global illumination
CN109364486A (en) * 2018-10-30 2019-02-22 网易(杭州)网络有限公司 The method and device of HDR rendering, electronic equipment, storage medium in game
CN109364481A (en) * 2018-10-30 2019-02-22 网易(杭州)网络有限公司 Real-time global illumination method, apparatus, medium and electronic equipment in game
CN111632378A (en) * 2020-06-08 2020-09-08 网易(杭州)网络有限公司 Illumination map making method, game model rendering method, illumination map making device, game model rendering device and electronic equipment
CN112755535A (en) * 2021-02-05 2021-05-07 腾讯科技(深圳)有限公司 Illumination rendering method and device, storage medium and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
过洁, 潘金贵. Research on Real-time Rendering under Complex Area Light Sources. Journal of System Simulation, 2012, Vol. 24, No. 1, pp. 6-11. *

Also Published As

Publication number Publication date
CN113648652A (en) 2021-11-16

Similar Documents

Publication Publication Date Title
CN108564646B (en) Object rendering method and device, storage medium and electronic device
CN112215934B (en) Game model rendering method and device, storage medium and electronic device
US11024077B2 (en) Global illumination calculation method and apparatus
CN113648652B (en) Object rendering method and device, storage medium and electronic equipment
JP5839907B2 (en) Image processing apparatus and image processing method
CN112116692A (en) Model rendering method, device and equipment
CN109448089A (en) A kind of rendering method and device
CN109785423A (en) Image light compensation method, device and computer equipment
CN113674389B (en) Scene rendering method and device, electronic equipment and storage medium
CN108043027B (en) Storage medium, electronic device, game screen display method and device
CN114119818A (en) Rendering method, device and equipment of scene model
US20230230311A1 (en) Rendering Method and Apparatus, and Device
US20230368459A1 (en) Systems and methods for rendering virtual objects using editable light-source parameter estimation
US9704290B2 (en) Deep image identifiers
US20240087219A1 (en) Method and apparatus for generating lighting image, device, and medium
CN105976423B (en) A kind of generation method and device of Lens Flare
CN116485973A (en) Material generation method of virtual object, electronic equipment and storage medium
CN115761105A (en) Illumination rendering method and device, electronic equipment and storage medium
CN116543094A (en) Model rendering method, device, computer readable storage medium and electronic equipment
US9626791B2 (en) Method for representing a participating media in a scene and corresponding device
WO2022042003A1 (en) Three-dimensional coloring method and apparatus, and computing device and storage medium
KR102235679B1 (en) Device and method to display object with visual effect
US11380048B2 (en) Method and system for determining a spectral representation of a color
WO2023184139A1 (en) Methods and systems for rendering three-dimensional scenes
AU2020368983B2 (en) Method and system for rendering

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40054541

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant