CN113648652A - Object rendering method and device, storage medium and electronic equipment - Google Patents

Object rendering method and device, storage medium and electronic equipment

Info

Publication number
CN113648652A
Authority
CN
China
Prior art keywords
light source
illumination
target
rendering
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110963740.5A
Other languages
Chinese (zh)
Other versions
CN113648652B (en)
Inventor
袁佳平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110963740.5A priority Critical patent/CN113648652B/en
Publication of CN113648652A publication Critical patent/CN113648652A/en
Application granted granted Critical
Publication of CN113648652B publication Critical patent/CN113648652B/en
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses an object rendering method and apparatus, a storage medium, and an electronic device. The method includes: determining a target virtual object currently to be rendered in a virtual scene configured with a surface light source; acquiring the illumination distance between a photosensitive position on the target virtual object and the surface light source; acquiring, from an illumination data set matched with the surface light source, target illumination data matched with the illumination distance, where the illumination data set includes illumination data corresponding to different spatial positions in the virtual scene; and rendering the photosensitive position on the target virtual object according to the target illumination data. The invention solves the technical problem that the object rendering methods provided by the related art have high rendering complexity.

Description

Object rendering method and device, storage medium and electronic equipment
Technical Field
The invention relates to the field of computers, in particular to an object rendering method and device, a storage medium and electronic equipment.
Background
Nowadays, in the virtual game scenes provided by many game applications, in order to give the player a sense of being personally on the scene, the game platform often simulates a number of element objects from real scenes in the virtual game scene. For example, mountains, rivers, animals, plants, buildings, and the like can be simulated. In addition, in order to make each object in the virtual game scene look more realistic, the luminous effect of light emitters in the real scene can also be simulated. At present, most virtual light sources configured in a virtual game scene by a 3D engine are point light sources; that is, the light source projects a cone-shaped bundle of rays from one point (the light source) in a certain direction, and the rays are denser and brighter near the central axis of the light source, and conversely sparser and dimmer toward the edge.
However, when the pixel values of light-receiving objects in the virtual game scene are calculated based on point light sources, a spherical distribution function is generally adopted and the computer is required to solve a spherical equation. The amount of calculation is large, which poses a challenge to real-time rendering and can reduce the fluency of the rendered picture. That is, the object rendering methods provided by the related art suffer from high rendering complexity.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an object rendering method and device, a storage medium and electronic equipment, and aims to at least solve the technical problem that the object rendering method provided by the related technology is high in rendering complexity.
According to an aspect of an embodiment of the present invention, there is provided an object rendering method including: determining a target virtual object to be rendered currently in a virtual scene configured with a surface light source; acquiring the illumination distance between a photosensitive position on the target virtual object and the surface light source; acquiring target illumination data matched with the illumination distance in an illumination data set matched with the surface light source, wherein the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene; rendering the photosensitive position on the target virtual object according to the target illumination data.
According to another aspect of the embodiments of the present invention, there is also provided an object rendering apparatus, including: the device comprises a first determining unit, a second determining unit and a control unit, wherein the first determining unit is used for determining a target virtual object to be rendered currently in a virtual scene configured with a surface light source; a first acquisition unit configured to acquire an illumination distance between a photosensitive position on the target virtual object and the surface light source; a second obtaining unit, configured to obtain, in an illumination data set that matches the area light source, target illumination data that matches the illumination distance, where the illumination data set includes illumination data corresponding to different spatial positions in the virtual scene; and the rendering unit is used for rendering the photosensitive position on the target virtual object according to the target illumination data.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the above object rendering method when running.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory and a processor, where the memory stores therein a computer program, and the processor is configured to execute the object rendering method by the computer program.
In the embodiment of the invention, in a virtual scene configured with a surface light source, a target virtual object to be rendered currently is determined, and the illumination distance between a photosensitive position on the target virtual object and the surface light source is obtained. And acquiring target illumination data matched with the illumination distance in an illumination data set generated for the surface light source in advance, wherein the illumination data set comprises illumination data corresponding to different spatial positions in a virtual scene. And then, rendering the photosensitive position on the target virtual object according to the target illumination data. That is to say, based on the area light source configured in the virtual scene, illumination data irradiated by the area light source is calculated in advance for each spatial position in the virtual scene, and then after a target virtual object to be rendered is determined, the illumination data corresponding to the photosensitive position is directly pulled from the existing illumination data, and rendering is performed based on the illumination data. And spherical distribution calculation does not need to be carried out on all photosensitive positions on the target virtual object, so that the calculation amount is greatly saved, the object rendering process is simplified, and the effect of improving the rendering efficiency is achieved. And further the problem of high rendering complexity of the object rendering method provided by the related technology is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment for an alternative object rendering method according to an embodiment of the invention;
FIG. 2 is a flow diagram of an alternative method of object rendering according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an alternative object rendering method according to an embodiment of the invention;
FIG. 4 is a schematic diagram of an alternative object rendering method according to an embodiment of the invention;
FIG. 5 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the invention;
FIG. 6 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the invention;
FIG. 7 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the invention;
FIG. 8 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the invention;
FIG. 9 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the invention;
FIG. 10 is a schematic diagram of an alternative object rendering method according to yet another embodiment of the invention;
FIG. 11 is a schematic diagram of yet another alternative object rendering method according to an embodiment of the invention;
FIG. 12 is a flow diagram of an alternative object rendering method according to an embodiment of the invention;
FIG. 13 is a schematic structural diagram of an alternative object rendering apparatus according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present application, the following technical terms may be referred to, but are not limited to:
Canvas: a part of HTML5 that allows scripting languages to dynamically render bitmap images.
Threejs: a 3D engine running in the browser that can be used to create various three-dimensional scenes on the Web. It is an easy-to-use 3D graphics library built by encapsulating and simplifying the WebGL interface, and contains various objects such as cameras, lights and shadows, and materials.
WebGL: a 3D drawing protocol. This drawing standard allows JavaScript to be combined with OpenGL ES 2.0; by adding a JavaScript binding for OpenGL ES 2.0, WebGL can provide hardware-accelerated 3D rendering for the HTML5 Canvas, so that a Web developer can display 3D scenes and models more smoothly in the browser with the help of the device's graphics card, and can also create complex navigation and data visualizations.
GPU (Graphics Processing Unit), also called display core, visual processor, or display chip: a microprocessor dedicated to image- and graphics-related operations on personal computers, workstations, game consoles, and some mobile devices (such as tablet computers and smart phones).
Shader: used to implement image rendering, that is, to color or draw something on the screen. Shaders replace the traditional fixed rendering pipeline and can implement the computations related to 3D graphics. Because they are programmable, a wide variety of image effects can be achieved without being limited by the graphics card's fixed rendering pipeline.
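For context, the terms above fit together roughly as in the following minimal sketch (an illustrative Three.js setup, not taken from the patent): a WebGL renderer draws a scene into an HTML5 canvas, and the shading work runs on the GPU when render() is called.

```ts
import * as THREE from 'three';

// Minimal Three.js scene: the WebGL renderer draws into an HTML5 <canvas>,
// and shaders run on the GPU when render() is called.
const canvas = document.createElement('canvas');
document.body.appendChild(canvas);

const renderer = new THREE.WebGLRenderer({ canvas, antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);

const scene = new THREE.Scene();
scene.add(new THREE.AmbientLight(0xffffff, 0.3));

const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 2, 5);

// A simple light-receiving object.
const box = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x808080 })
);
scene.add(box);

renderer.render(scene, camera); // rasterized by the GPU via WebGL
```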
According to an aspect of the embodiments of the present invention, an object rendering method is provided, optionally, as an optional implementation manner, the object rendering method may be applied, but not limited, to an object rendering system in a hardware environment as shown in fig. 1, where the object rendering system may include, but is not limited to, a terminal device 102, a network 104, a server 106, and a database 108. A target client (for example, a game client as shown in fig. 1) for displaying a virtual scene is running in the terminal device 102. The terminal device 102 includes a human-computer interaction screen, a processor and a memory. The human-computer interaction screen is used for displaying each virtual object appearing in a virtual game scene (as shown in fig. 1, the virtual game scene is a shooting game scene); and the system is also used for providing a human-computer interaction interface to receive human-computer interaction operation for controlling a controlled virtual object in the virtual game scene, wherein the virtual object is to complete a game task set in the virtual game scene. The processor is used for responding the human-computer interaction operation to generate an interaction instruction and sending the interaction instruction to the server. The memory is used for storing relevant attribute data, such as object attribute data of a virtual object to be rendered and lighting data required for rendering.
In addition, a processing engine is included in server 106 for performing storage or read operations on database 108. Specifically, the processing engine stores the attribute information corresponding to the marked target object returned by the terminal device 102 in the database 108.
The specific process includes the following steps: in the client running in the terminal device 102, a virtual scene provided with the surface light source 10 is displayed, as in step S100. Then, as in step S102, the identification of the determined target virtual object 11 to be currently rendered is sent to the server 106 through the network 104. The server 106 executes steps S104 to S106 to acquire the illumination distance between the photosensitive position on the target virtual object indicated by the identification and the surface light source, and acquires, from the illumination data set matching the surface light source 10 stored in the database 108, the target illumination data matching the illumination distance. Then, as in step S108, the target illumination data is returned to the terminal device 102 through the network 104. After receiving the target illumination data, the terminal device 102 executes step S110 to render the photosensitive position on the target virtual object according to the target illumination data.
It should be noted that, in this embodiment, in a virtual scene configured with a surface light source, a target virtual object to be currently rendered is determined, and an illumination distance between a photosensitive position on the target virtual object and the surface light source is acquired. And acquiring target illumination data matched with the illumination distance in an illumination data set generated for the surface light source in advance, wherein the illumination data set comprises illumination data corresponding to different spatial positions in a virtual scene. And then, rendering the photosensitive position on the target virtual object according to the target illumination data. That is to say, based on the area light source configured in the virtual scene, illumination data irradiated by the area light source is calculated in advance for each spatial position in the virtual scene, and then after a target virtual object to be rendered is determined, the illumination data corresponding to the photosensitive position is directly pulled from the existing illumination data, and rendering is performed based on the illumination data. And spherical distribution calculation does not need to be carried out on all photosensitive positions on the target virtual object, so that the calculation amount is greatly saved, the object rendering process is simplified, and the effect of improving the rendering efficiency is achieved. And further the problem of high rendering complexity of the object rendering method provided by the related technology is solved.
Optionally, in this embodiment, the terminal device may be a terminal device configured with a target client, and may include, but is not limited to, at least one of the following: mobile phones (such as Android phones, iOS phones, etc.), notebook computers, tablet computers, palm computers, MID (Mobile Internet Devices), PAD, desktop computers, smart televisions, etc. The target client may be a video client, an instant messaging client, a browser client, an educational client, etc. that supports displaying a virtual scene configured with a surface light source. Such networks may include, but are not limited to: a wired network, a wireless network, wherein the wired network comprises: a local area network, a metropolitan area network, and a wide area network, the wireless network comprising: bluetooth, WIFI, and other networks that enable wireless communication. The server may be a single server, a server cluster composed of a plurality of servers, or a cloud server. The above is merely an example, and this is not limited in this embodiment.
Optionally, as an optional implementation manner, as shown in fig. 2, the object rendering method includes:
s202, determining a target virtual object to be rendered currently in a virtual scene configured with a surface light source;
It should be noted that when the point light sources used in the related art illuminate a virtual scene, their light intensity decreases toward the edges; when the light falls on the surface of a light-receiving object, bright and dark apertures appear, and because the apertures of different point light sources overlap, the brightness of the light-receiving surface becomes uneven. Therefore, in the embodiment of the present application, in a virtual scene configured with a surface light source, the target illumination data corresponding to each photosensitive position on the light-receiving target virtual object is looked up in an illumination data set pre-computed for different spatial positions, so that the rendering process of the target virtual object is completed quickly and the object rendering efficiency is improved.
Alternatively, in the present embodiment, the surface light source may be, but is not limited to, a planar light source having a polygonal shape, such as a strip light, a square light, a hexagonal light, and the like. In addition, in the present embodiment, the target virtual object may be, but not limited to, an object in the virtual scene that supports light sensing and reflection of light emitted by the surface light source to display an image, such as a virtual character, a virtual animal, a virtual object, a virtual building, a virtual vehicle, and the like. The above is an example, and this is not limited in this embodiment.
S204, acquiring an illumination distance between a photosensitive position on the target virtual object and the surface light source;
it should be noted that the target virtual object here is an object occupying a certain space volume in the virtual scene, and a position (i.e., a photosensitive position) in a partial region on the surface of the target virtual object will be irradiated by the surface light source to display an illumination effect.
For example, as shown in fig. 3, assume that a surface light source 302 is arranged in the virtual scene; here the surface light source 302 consists of three rectangular surface light sources, each configured with a different color. The surface of the target virtual object 304 in the virtual scene that faces the area light source 302 will display a bright-surface effect after being irradiated; as shown in fig. 3, the left surface is a dark surface and the right surface is a bright surface.
Alternatively, in the present embodiment, the illumination distance between the photosensitive position on the target virtual object and the surface light source may be, but is not limited to, the shortest connecting line distance from the photosensitive position to the surface light source.
S206, acquiring target illumination data matched with an illumination distance in an illumination data set matched with the surface light source, wherein the illumination data set comprises illumination data corresponding to different spatial positions in a virtual scene;
optionally, in this embodiment, the illumination data set may be, but is not limited to, illumination data calculated in advance based on the illumination of the area light source to each spatial position in the virtual scene, where the illumination data is used to indicate rendering data required for the spatial position to display the sensitization effect. The target illumination data matched with the photosensitive position on the target virtual object is searched in the illumination data set, which may be but is not limited to the illumination distance according to the photosensitive position relative to the surface light source. That is, a mapping relationship between spatial positions corresponding to different illumination distances and corresponding illumination data will be recorded in the illumination data set.
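As a concrete illustration of such a distance-keyed lookup, the sketch below stores the precomputed illumination data in a table sampled at fixed distance steps (the class and field names are assumptions made for illustration, not taken from the patent):

```ts
// Hypothetical precomputed illumination data set, keyed by illumination distance.
interface IlluminationData {
  color: [number, number, number]; // RGB in [0, 1]
  alpha: number;                   // transparency
}

class IlluminationDataSet {
  // sampled distance index -> illumination data
  private table = new Map<number, IlluminationData>();

  constructor(private step: number) {}

  set(distance: number, data: IlluminationData): void {
    this.table.set(Math.round(distance / this.step), data);
  }

  // Look up the entry whose sampled distance is closest to the query distance.
  lookup(distance: number): IlluminationData | undefined {
    return this.table.get(Math.round(distance / this.step));
  }
}
```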
And S208, rendering the photosensitive position on the target virtual object according to the target illumination data.
Optionally, in this embodiment, the target virtual object may be, but is not limited to being, rendered based on a shader. A shader is a function for drawing things on the screen, and it runs on the GPU of the graphics card in the terminal device. The rendering process can be completed by writing a custom shading program in the shader programming language, i.e., the OpenGL Shading Language (GLSL). The shader may be, but is not limited to, a rendering tool that performs rendering in a three-dimensional drawing engine by using a predetermined drawing protocol, and is configured to calculate information about the pixels at each photosensitive position on the light-receiving object according to the shader program, and then render the calculation result onto the screen.
For example, assume that the content shown in fig. 4 is a rendering process of a shader that employs the WebGL drawing protocol. When the light-receiving object (such as a target virtual object) is illuminated, a corresponding shader program is operated to modify the color value of a surface pixel of the light-receiving object based on the WebGL protocol according to the attribute of light emitted by the light source, so that the illumination effect of the light-receiving object after being illuminated by the light source is simulated. The method comprises the steps of drawing through a vertex shader and a fragment shader in different shading programs, calculating color values of pixels in a GPU, and outputting calculation results to a screen.
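A minimal sketch of such a vertex/fragment shader pair, written as a Three.js ShaderMaterial (an assumed structure for illustration; the patent does not give this code): the fragment shader tints surface pixels according to a light color and intensity uniform, and the GPU writes the result to the screen.

```ts
import * as THREE from 'three';

// GLSL source for the two programmable stages of the pipeline.
const vertexShader = /* glsl */ `
  varying vec3 vNormal;
  void main() {
    vNormal = normalize(normalMatrix * normal);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

const fragmentShader = /* glsl */ `
  uniform vec3 lightColor;
  uniform float lightIntensity;
  varying vec3 vNormal;
  void main() {
    // Simple view-space shading: faces turned toward the camera receive more light.
    float facing = max(vNormal.z, 0.0);
    gl_FragColor = vec4(lightColor * lightIntensity * facing, 1.0);
  }
`;

const material = new THREE.ShaderMaterial({
  vertexShader,
  fragmentShader,
  uniforms: {
    lightColor: { value: new THREE.Color(0xfff2cc) },
    lightIntensity: { value: 1.0 },
  },
});
```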
Optionally, in this embodiment, the shader may be, but is not limited to, the RectAreaLightUniformsLib shader preset in a three-dimensional engine running in the browser (e.g., Threejs), which uses a shader program written according to the Linearly Transformed Cosines (LTC) algorithm. The shader program can be used to approximately simulate the illumination effect of a Bidirectional Reflectance Distribution Function (BRDF), that is, the effect of light on the object surface. The color attributes of all pixels in the light-receiving area on the outer surface of the object can be changed by this shader, thereby obtaining the illumination effect of the light source shining on that area.
The shader, namely RectAreaLightUniformsLib, has very long code and mainly encapsulates the LTC transformation matrix. At run time the shader performs the corresponding linear transformation on the light according to the transformation matrix to project it onto the surface of the object, and then modifies the color values of all pixels in the projection area according to the attributes of the light. In addition, inside RectAreaLight the planar light is projected onto the object surface, all pixels in the projection area are drawn based on the RectAreaLightUniformsLib shader, and the shader program recalculates the color value of each pixel and renders the result to the screen in real time, so that the user sees the effect of planar light on the object.
The BRDF is used to define the radiance in a given incident direction versus the radiance in a given exit direction. It describes the distribution of incident light reflected by a surface in each exit direction, i.e. the reflection of light seen by the human eye.
For example, a laser emitter emits light vertically downward onto a table top, and a bright spot can be seen on the table top. Observing the bright spot from different directions, its brightness is found to change with the observation direction. Keeping the observation direction of the eyes unchanged and only changing the direction of the laser, the brightness of the bright spot can likewise be observed to change continuously. This is because the object surface has different reflectivities for different combinations of incidence and reflection angles of light. That is to say, the BRDF defines a method for calculating the brightness of light reflected from the surface of an object material toward the viewpoint after the light in the scene hits that surface; it is a generation algorithm for the surface color of an object after it is illuminated. For example, as shown in FIG. 5, n is the normal vector, w_i is the incident light vector, and w_o is the outgoing ray vector; the BRDF gives the luminance of the outgoing ray w_o for the combination of w_i and w_o.
However, the BRDF algorithm uses a spherical distribution function. When rendering with a polygonal area light source (i.e., when calculating the color values of pixels on the surface of an object illuminated by the light), the BRDF needs to be integrated over the polygonal area covered by the light source, as shown in fig. 6; the computer has to solve a spherical equation whenever the polygonal area light source illuminates the object surface. The large amount of calculation poses a challenge to real-time rendering and reduces the smoothness with which objects in the image are rendered.
In order to overcome the difficulty of real-time computation, in this embodiment the LTC algorithm (also referred to as a linear mapping) may be used, but is not limited to being used, to approximately simulate the BRDF algorithm, thereby reducing the computation overhead so that the result can be computed in real time while the smoothness of the picture is guaranteed. The LTC algorithm here relies on a special mapping between two vector spaces that preserves vector addition and scalar multiplication. The linear transformation maps all points in one coordinate system into another coordinate system of the vector space according to some transformation rule. For example, as shown in fig. 7, the point x is mapped by the function T to the position T(x) in the value range, and all points of the "domain" in the figure can be mapped into the "range" by the transformation T. In this embodiment, the LTC algorithm may be used to solve the illumination rendering problem of the projection area formed by the polygonal area light source on the surface of the object, so that the spherical equation used to calculate the BRDF is simplified into a linear equation and the calculation complexity is reduced.
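As a small illustration of what such a linear mapping looks like in code (a hypothetical sketch, not the patent's implementation): a 3x3 matrix M sends a direction from one space to another, and M⁻¹ maps it back, which is exactly the invertibility the LTC substitution relies on.

```ts
import * as THREE from 'three';

// Map a direction w through a linear transformation M and renormalize it.
function transformDirection(M: THREE.Matrix3, w: THREE.Vector3): THREE.Vector3 {
  return w.clone().applyMatrix3(M).normalize();
}

// Example: M and its inverse undo each other, so a direction mapped forward
// and then backward returns to itself (up to floating-point error).
const M = new THREE.Matrix3().set(
  1.0, 0.0, 0.2,
  0.0, 0.8, 0.0,
  0.0, 0.0, 1.0
);
const Minv = M.clone().invert();

const wo = new THREE.Vector3(0.3, 0.5, 0.8).normalize();
const wi = transformDirection(M, wo);      // w_i = M * w_o (then normalized)
const back = transformDirection(Minv, wi); // recovers w_o
console.log(wi, back);
```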
In this embodiment, illumination data formed by the area light source irradiating each spatial position is pre-calculated based on the LTC algorithm, so as to obtain an illumination data set. Therefore, when a light-receiving object (namely the target virtual object) is hit by the light of the surface light source, the corresponding illumination data can be directly pulled from the illumination data set based on the corresponding illumination distance to complete photosensitive rendering, so that the target virtual object can quickly and efficiently present the illumination rendering effect irradiated by the surface light source.
Through the embodiment provided by the application, in the virtual scene configured with the surface light source, the target virtual object to be rendered at present is determined, and the illumination distance between the photosensitive position on the target virtual object and the surface light source is obtained. And acquiring target illumination data matched with the illumination distance in an illumination data set generated for the surface light source in advance, wherein the illumination data set comprises illumination data corresponding to different spatial positions in a virtual scene. And then, rendering the photosensitive position on the target virtual object according to the target illumination data. That is to say, based on the area light source configured in the virtual scene, illumination data irradiated by the area light source is calculated in advance for each spatial position in the virtual scene, and then after a target virtual object to be rendered is determined, the illumination data corresponding to the photosensitive position is directly pulled from the existing illumination data, and rendering is performed based on the illumination data. And spherical distribution calculation does not need to be carried out on all photosensitive positions on the target virtual object, so that the calculation amount is greatly saved, the object rendering process is simplified, and the effect of improving the rendering efficiency is achieved. And further the problem of high rendering complexity of the object rendering method provided by the related technology is solved.
As an optional scheme, in a virtual scene configured with a surface light source, determining a target virtual object to be currently rendered includes:
and S1, when the light rays emitted by the surface light source are projected to the virtual scene, determining the virtual object hit by the light rays as a target virtual object, and determining the surface area projected by the light rays on the target virtual object as a photosensitive area, wherein the photosensitive area comprises a photosensitive position.
The description is made with reference to the example shown in fig. 8: assuming that a surface light source shown on the right side in fig. 8 is arranged in the virtual scene, ray detection is performed based on light emitted from the surface light source, and illumination data which is irradiated to each spatial position in the space provided by the virtual scene is calculated based on the detection result. For example, as shown in fig. 8, the positions of the light rays at different illumination distances form different illumination surfaces (e.g., areas filled with oblique lines in the figure), and the illumination data is calculated at each position on the illumination surfaces.
When the light hits the target virtual object as the light receiving object, the hit surface area, for example, a white area of the target virtual object in fig. 8 is determined as a light sensing area, a position in the light sensing area is a light sensing position, then, the illumination data required for rendering the light sensing position at this position is found out from the pre-calculated illumination data according to the illumination distance, and the color value on the pixel point of the light sensing position is adjusted and rendered according to the found illumination data, so as to present the illumination effect after the light sensing position is illuminated by the surface light source.
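A sketch of this hit detection using Three.js ray casting (an assumed approach chosen for illustration; the patent does not prescribe this API): rays are cast from sample points on the area light, and every mesh they hit is treated as a light-receiving object whose hit points delimit the photosensitive area.

```ts
import * as THREE from 'three';

// Cast rays from sample points on the area light toward the scene; any mesh
// that is hit is treated as a light-receiving (target) virtual object, and the
// hit points delimit its photosensitive area.
function findLitObjects(
  lightSamplePoints: THREE.Vector3[],
  lightDirection: THREE.Vector3,
  sceneObjects: THREE.Object3D[]
): Map<THREE.Object3D, THREE.Vector3[]> {
  const raycaster = new THREE.Raycaster();
  const hitsPerObject = new Map<THREE.Object3D, THREE.Vector3[]>();

  for (const origin of lightSamplePoints) {
    raycaster.set(origin, lightDirection.clone().normalize());
    const hits = raycaster.intersectObjects(sceneObjects, true);
    if (hits.length > 0) {
      const first = hits[0]; // nearest surface hit by this ray
      const list = hitsPerObject.get(first.object) ?? [];
      list.push(first.point);
      hitsPerObject.set(first.object, list);
    }
  }
  return hitsPerObject;
}
```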
Through the embodiment provided by the application, when light emitted by the surface light source is projected to a virtual scene, the virtual object hit by the light is determined as the target virtual object, and the surface area projected by the light on the target virtual object is determined as the photosensitive area, so that illumination data corresponding to the photosensitive position in the photosensitive area can be obtained for rendering, and the illumination effect of the target virtual object after being irradiated by the surface light source can be accurately presented.
As an alternative, the obtaining of the illumination distance between the photosensitive position on the target virtual object and the surface light source includes:
and S1, traversing each photosensitive position in the photosensitive area as the current photosensitive position, and executing the following operations:
s11, calculating the distance between the current photosensitive position and the position of a target point in the surface light source, wherein the position of the target point is the point position pointed by the shortest connecting line from the current photosensitive position to the surface light source;
and S12, determining the distance as the illumination distance.
The description is made with reference to the example shown in fig. 9: each photosensitive position in the photosensitive region (such as a white region of the target virtual object shown in the figure) is traversed, and the illumination distance is calculated by taking the photosensitive position as the current photosensitive position.
Taking the photosensitive position a as an example, the target point position a' corresponding to the photosensitive position a in the surface light source is determined, the shortest connecting-line distance L between the target point position a' and the photosensitive position a is calculated, and the distance L is then determined as the illumination distance between the photosensitive position a and the surface light source, so that the illumination data required for rendering this photosensitive position can be looked up in the illumination data set based on the illumination distance.
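A sketch of this distance computation for a rectangular area light (the helper names and the light description are assumptions for illustration; the light is taken to lie in a plane with a known center, local axes, and half-extents):

```ts
import * as THREE from 'three';

// Description of a rectangular area light in world space.
interface RectLight {
  center: THREE.Vector3;
  xAxis: THREE.Vector3;   // unit vector along the light's width
  yAxis: THREE.Vector3;   // unit vector along the light's height
  halfWidth: number;
  halfHeight: number;
}

// Closest point a' on the light rectangle to the photosensitive position a,
// and the illumination distance L = |a - a'|.
function illuminationDistance(a: THREE.Vector3, light: RectLight): number {
  const d = a.clone().sub(light.center);
  const u = THREE.MathUtils.clamp(d.dot(light.xAxis), -light.halfWidth, light.halfWidth);
  const v = THREE.MathUtils.clamp(d.dot(light.yAxis), -light.halfHeight, light.halfHeight);
  const closest = light.center.clone()
    .addScaledVector(light.xAxis, u)
    .addScaledVector(light.yAxis, v); // target point position a'
  return a.distanceTo(closest);
}
```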
Through the embodiment provided by the application, the distance between the photosensitive position on the target virtual object and the position of the target point in the surface light source is calculated to serve as the illumination distance for finding out the corresponding illumination data from the illumination data set. Therefore, data required by rendering the light-receiving object can be quickly and accurately obtained, and the efficiency of rendering the light-receiving object is improved.
As an alternative, rendering the photosensitive position on the target virtual object according to the target illumination data includes:
s1, extracting an image rendering parameter value corresponding to the target spatial position from the target illumination data, where the image rendering parameter value includes: color value and transparency;
s2, according to the image rendering parameter values, adjusting the rendering parameter values of each pixel on the photosensitive position corresponding to the target space position to obtain adjusted rendering parameter values;
and S3, performing picture rendering according to the adjusted rendering parameter value.
For example, rendering is performed by a shader preset in a three-dimensional engine (e.g., Threejs) running in the browser. The linear transformation matrix M encapsulated in the shader performs the corresponding linear transformation on the light emitted by the surface light source so that the light is projected onto the surface of the object, and then modifies the color value and transparency of each pixel in the projection area (i.e., the photosensitive area on the target virtual object) in combination with the attribute information of the surface light source (such as light source color, light source shape and area, light source intensity, etc.): DataTexture(LTC_MAT_1, LTC_MAT_2). The color values (Red Green Blue, i.e., RGB values) and transparency of each pixel point are recorded in LTC_MAT_1, and the normal vector and position coordinate information of each pixel point are recorded in LTC_MAT_2.
It should be noted that each pixel point on the target virtual object is configured with an original rendering parameter value when not being irradiated by the surface light source. In this embodiment, when it is detected that the surface area of the target virtual object is irradiated by the configured surface light source, an image rendering parameter value of a pixel point at a photosensitive position in a photosensitive area formed by projection is determined according to the calculation process, so as to adjust the original rendering parameter value, and obtain an adjusted rendering parameter value.
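A sketch of this adjustment step (illustrative only; the blend rule and names are assumptions, not the patent's exact formula): the original rendering parameter values of a pixel are combined with the image rendering parameter values pulled from the illumination data, and the result is clamped to the displayable range.

```ts
interface RenderParams {
  color: [number, number, number]; // RGB in [0, 1]
  alpha: number;                   // transparency
}

// Combine a pixel's original parameters with the looked-up illumination data.
// Here the light color is added in proportion to its alpha.
function adjustPixel(original: RenderParams, light: RenderParams): RenderParams {
  const clamp01 = (x: number) => Math.min(1, Math.max(0, x));
  return {
    color: [
      clamp01(original.color[0] + light.color[0] * light.alpha),
      clamp01(original.color[1] + light.color[1] * light.alpha),
      clamp01(original.color[2] + light.color[2] * light.alpha),
    ],
    alpha: original.alpha,
  };
}
```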
According to the embodiment provided by the application, after the image rendering parameter value corresponding to the target space position at present is extracted from the target illumination data, the rendering parameter value of each pixel on the photosensitive position corresponding to the target space position is adjusted according to the image rendering parameter value, and the adjusted rendering parameter value is obtained; and performing picture rendering according to the adjusted rendering parameter value. And complex rendering calculation is not required to be carried out on the photosensitive position, so that the effect of improving the rendering efficiency is achieved.
As an optional scheme, before determining a target virtual object to be currently rendered in a virtual scene configured with a surface light source, the method further includes:
s1, constructing an illumination equation corresponding to each space position based on the illumination result of each point light source on the surface light source irradiating each space position in the virtual scene;
s2, carrying out linear transformation solving calculation on the illumination equation to obtain illumination data corresponding to the spatial position;
and S3, constructing an illumination data set based on the illumination data corresponding to each spatial position.
Optionally, in this embodiment, in step S1, constructing, based on illumination results of the point light sources on the surface light source illuminating to the spatial positions in the virtual scene, an illumination equation corresponding to each spatial position includes:
s11, sequentially taking the space positions as the current coloring point positions, and determining a current illumination equation corresponding to the current coloring point positions;
s12, traversing each point light source in the surface light source, and taking each point light source as a current point light source in sequence;
s12-1, under the condition that the current point light source is not the last point light source in the area light source, determining the current light vector of the current point light source irradiating the current coloring point position and the current normal vector corresponding to the current coloring point position;
s12-2, calculating a current included angle between the current light ray vector and the current normal vector;
s12-3, acquiring a linear transformation matrix packaged in a shader, wherein element parameter values in the linear transformation matrix are parameter values of which error values obtained through multiple training are located in a target interval;
s12-4, constructing a current object illumination equation of the current point light source irradiating the current coloring point position based on the current light vector, the linear conversion matrix and the cosine value of the current included angle;
and S12-5, performing integral calculation on the object illumination equation corresponding to each point light source to construct the current illumination equation.
The description is made with specific reference to the following examples:
the illumination calculation formula of the gloss reflection of the surface light source here is assumed as follows:
Figure BDA0003223099230000141
wherein S is a surface light source region, S is a point on the surface light source region, p is a coloring point, npIs the normal to the point of coloration, wiIs a vector of s to p, θpIs npAnd wiThe included angle of (a).
And the illumination calculation formula of the diffuse reflection of the surface light source is as follows:

∫_S (cos(θ_p) · cos(θ_s)) / ||s - p||² ds   (2)

where θ_s is the angle between the normal n_s at the point s on the surface light source and -w_i. Equation (1) is similar to equation (2), but the glossy reflection adds the complex BRDF f and the color L of the texture on the surface light source.
The simplified processing of the complex illumination equation described above can be done here by a linear transformation matrix, since cos(θ_s) and f(p, w_i, w_o) are both spherical distribution functions, and thus they can be related according to the following formula:

f(p, w_i, w_o) ≈ M · cos(θ_s)   (3)

where M is a linear transformation matrix; that is to say, for any f(p, w_i, w_o) an M transformation matrix must be found that converts it into cos(θ_s). The linear transformation here multiplies the incident vector by the matrix M, i.e.:

w_i = M · w_o,   w_o = M⁻¹ · w_i   (4)

together with the Jacobian term |∂w_o/∂w_i|, which is the normalization performed because the area changes after the linear transformation. This term is also pre-computed in advance along with the M matrix and can conveniently be sampled from a texture.
The M matrix here may be found, but is not limited to being found, by traversing candidate matrices until a matrix whose error is sufficiently small is obtained. For example, using an idea similar to gradient descent, a matrix M is initialized randomly, an optimization gradient is then computed (i.e., M is adjusted to obtain a corrected matrix M), and the operation is repeated until the error falls within the specified value range.
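A sketch of that fitting loop (a simplified, hypothetical illustration of the "random init, then adjust until the error is small enough" idea; in the LTC literature this fitting is done offline over many BRDF configurations, and the error function below is an assumed stand-in):

```ts
// Fit a matrix M (flattened to a number[]) so that error(M) falls below a
// tolerance, by nudging each entry in the direction that reduces the error.
function fitMatrix(
  error: (m: number[]) => number,  // e.g. difference between M*cos and the target BRDF lobe
  tolerance: number,
  learningRate = 0.01,
  maxSteps = 10000
): number[] {
  // Random initialization.
  let m = Array.from({ length: 9 }, () => Math.random() * 2 - 1);

  for (let step = 0; step < maxSteps && error(m) > tolerance; step++) {
    const eps = 1e-4;
    const grad = m.map((_, i) => {
      const plus = m.slice(); plus[i] += eps;
      const minus = m.slice(); minus[i] -= eps;
      return (error(plus) - error(minus)) / (2 * eps); // finite-difference gradient
    });
    m = m.map((v, i) => v - learningRate * grad[i]);   // gradient-descent update
  }
  return m;
}
```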
Substituting the above formula (4) into formula (1), on the basis of the illumination equation constructed above, gives:

∫_S f(p, w_i, w_o) · L · cos(θ_p) dw_i ≈ L · ∫_{M⁻¹S} cos(θ_s) dw_o   (5)

Taking the above formula (5) as the illumination equation constructed for the surface light source in this embodiment, a linear solution of this equation is then computed to obtain the rendering parameter values required for rendering, such as color values and transparency.
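For reference, the payoff of this substitution is that the right-hand side of (5), an integral of a cosine over the transformed polygon, has a classical closed form that can be evaluated edge by edge. The sketch below uses Lambert's polygon formula as that closed form; this is an assumption about how the integral is evaluated, not code from the patent.

```ts
import * as THREE from 'three';

// Integral of the normalized clamped cosine (cos(theta)/pi) over a spherical
// polygon, evaluated edge by edge (Lambert's formula). The inputs are the
// polygon corners after the M⁻¹ transform, expressed in the coloring point's
// local frame with the normal along +z.
function cosineIntegralOverPolygon(vertices: THREE.Vector3[]): number {
  const dirs = vertices.map(v => v.clone().normalize()); // project onto the unit sphere
  const sum = new THREE.Vector3();
  for (let i = 0; i < dirs.length; i++) {
    const a = dirs[i];
    const b = dirs[(i + 1) % dirs.length];
    const theta = Math.acos(THREE.MathUtils.clamp(a.dot(b), -1, 1)); // arc length of this edge
    if (theta < 1e-6) continue; // skip degenerate edges
    const edge = new THREE.Vector3().crossVectors(a, b).normalize().multiplyScalar(theta);
    sum.add(edge);
  }
  // Project onto the shading normal (+z), clamp to the upper hemisphere,
  // and normalize by 2*pi.
  return Math.max(sum.z, 0) / (2 * Math.PI);
}
```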
According to the embodiment provided by the application, the illumination data required by rendering is calculated and obtained through the illumination equation established for the surface light source through the linear transformation algorithm, spherical calculation is not needed, the rendering calculation amount is greatly reduced, and therefore the purpose of improving the rendering efficiency is achieved.
As an optional scheme, before determining a target virtual object to be currently rendered in a virtual scene configured with a surface light source, the method further includes:
s1, setting light emitting attribute information of the surface light source in the virtual scene, wherein the light emitting attribute information comprises at least one of the following: the color of the surface light source, the illumination intensity of the surface light source, and the light emitting area of the surface light source.
It should be noted that the area light source in this embodiment may be, but is not limited to, pre-configure its light emission attribute information, and further, calculate the illumination data of each spatial position in the virtual scene by combining this information.
For example, take as the area light source the planar light source RectAreaLight packaged by the Threejs engine, which is a planar light source implemented with the RectAreaLightUniformsLib shader and can emit light uniformly from a rectangular plane. The color, intensity, and light-emitting area of the light source can be set using RectAreaLight, and the display effect of the surface light source can be as shown in fig. 10 to fig. 11.
The Threejs engine also includes a helper, RectAreaLightHelper, preset for the area light source. Its main function is to wrap the RectAreaLight light source with a frame so that it looks more like a planar light source; it can also restrict the light-emitting face, for example, so that in this case only one face emits light and the other face does not.
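A sketch of this setup in Threejs (standard RectAreaLight usage; the specific values are illustrative):

```ts
import * as THREE from 'three';
import { RectAreaLightUniformsLib } from 'three/examples/jsm/lights/RectAreaLightUniformsLib.js';
import { RectAreaLightHelper } from 'three/examples/jsm/helpers/RectAreaLightHelper.js';

// The LTC lookup data must be initialized once before RectAreaLight is used.
RectAreaLightUniformsLib.init();

// color, intensity, width, height: the light-emitting attribute information.
const areaLight = new THREE.RectAreaLight(0xffffff, 5, 4, 2);
areaLight.position.set(0, 3, 0);
areaLight.lookAt(0, 0, 0);

// Frame the light so it reads as a planar source; it emits from one face only.
areaLight.add(new RectAreaLightHelper(areaLight));

const scene = new THREE.Scene();
scene.add(areaLight);
```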
The rendering process in the embodiment of the present application is fully described below with reference to the flow shown in fig. 12:
in step S1202, the color, intensity, and light emitting area of the surface light source are set as required by the item.
In step S1204, a shader program packaged with the LTC algorithm is executed.
Wherein the following steps are implemented in the shader:
and S1204-1, projecting the light emitted by the surface light source to the object surface of the light receiving object to form a projection area.
S1204-2, extracting each pixel in the projection area.
And S1204-3, modifying the color value of each pixel to obtain the adjusted new color.
And S1204-4, rendering the adjusted new color to the screen in real time.
The flow shown in fig. 12 is an example and does not limit the order of the steps in this embodiment in any way.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiment of the present invention, there is also provided an object rendering apparatus for implementing the object rendering method. As shown in fig. 13, the apparatus includes:
a first determining unit 1302, configured to determine, in a virtual scene configured with a surface light source, a target virtual object to be currently rendered;
a first obtaining unit 1304 for obtaining an illumination distance between a photosensitive position on the target virtual object and the surface light source;
a second obtaining unit 1306, configured to obtain, in an illumination data set matched with the area light source, target illumination data matched with an illumination distance, where the illumination data set includes illumination data corresponding to different spatial positions in the virtual scene;
a rendering unit 1308, configured to render the photosensitive position on the target virtual object according to the target illumination data.
In this embodiment, reference may be made to the above method embodiment for implementing the embodiments of each module unit, which are not described herein again.
As an alternative, the first determining unit 1302 includes:
the first determining module is used for determining a virtual object hit by light rays as a target virtual object when the light rays emitted by the surface light source are projected to a virtual scene, and determining a surface area of the light rays projected on the target virtual object as a photosensitive area, wherein the photosensitive area comprises a photosensitive position.
In this embodiment, reference may be made to the above method embodiment for implementing the embodiments of each module unit, which are not described herein again.
As an alternative, the first obtaining unit 1304 includes:
the first processing module is used for traversing each photosensitive position in the photosensitive area, respectively serving as a current photosensitive position, and executing the following operations:
s1, calculating the distance between the current photosensitive position and the position of a target point in the surface light source, wherein the position of the target point is the point position pointed by the shortest connecting line from the current photosensitive position to the surface light source;
and S2, determining the distance as the illumination distance.
In this embodiment, reference may be made to the above method embodiment for implementing the embodiments of each module unit, which are not described herein again.
As an alternative, the rendering unit 1308 includes:
an extraction module, configured to extract an image rendering parameter value currently corresponding to a target spatial position from the target illumination data, where the image rendering parameter value includes: color value and transparency;
the adjusting module is used for adjusting the rendering parameter value of each pixel on the photosensitive position corresponding to the target space position according to the image rendering parameter value to obtain an adjusted rendering parameter value;
and the rendering module is used for rendering the picture according to the adjusted rendering parameter value.
In this embodiment, reference may be made to the above method embodiment for implementing the embodiments of each module unit, which are not described herein again.
As an optional scheme, the method further comprises the following steps:
the system comprises a first construction unit, a second construction unit and a control unit, wherein the first construction unit is used for constructing an illumination equation corresponding to each space position on the basis of an illumination result of each point light source on the surface light source irradiating each space position in a virtual scene before a target virtual object to be rendered at present is determined in the virtual scene configured with the surface light source;
the calculation unit is used for performing linear transformation solving calculation on the illumination equation to obtain illumination data corresponding to the spatial position;
and the second construction unit is used for constructing an illumination data set based on the illumination data corresponding to each spatial position.
In this embodiment, reference may be made to the above method embodiment for implementing the embodiments of each module unit, which are not described herein again.
As an alternative, the first building element comprises:
the second determining module is used for sequentially taking the space positions as the current coloring point positions and determining a current illumination equation corresponding to the current coloring point positions;
the second processing module is used for traversing each point light source in the surface light source and taking each point light source as a current point light source in sequence;
under the condition that the current point light source is not the last point light source in the surface light source, determining the current light vector of the current point light source irradiating the current coloring point position and the current normal vector corresponding to the current coloring point position;
calculating a current included angle between a current light ray vector and a current normal vector;
acquiring a linear transformation matrix packaged in a shader, wherein element parameter values in the linear transformation matrix are parameter values of which error values obtained through multiple times of training are located in a target interval;
constructing a current object illumination equation of the current point light source irradiating the current coloring point position based on the current light vector, the linear conversion matrix and the cosine value of the current included angle;
and performing integral calculation on the object illumination equation corresponding to each point light source to construct the current illumination equation.
In this embodiment, reference may be made to the above method embodiment for implementing the embodiments of each module unit, which are not described herein again.
As an optional scheme, the method further comprises the following steps:
the setting unit is used for setting light emitting attribute information of the surface light source in a virtual scene before a target virtual object to be rendered currently is determined in the virtual scene configured with the surface light source, wherein the light emitting attribute information comprises at least one of the following: the color of the surface light source, the illumination intensity of the surface light source, and the light emitting area of the surface light source.
In this embodiment, for the implementation of each of the above module units, reference may be made to the above method embodiments; details are not repeated here.
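Purely as an illustration of this configuration step, a small Python structure holding the light emitting attribute information; the field names and default values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SurfaceLightAttributes:
    color: tuple = (1.0, 1.0, 1.0)   # RGB color of the surface light source
    intensity: float = 10.0          # illumination intensity (arbitrary units)
    emitting_area: float = 4.0       # light emitting area, e.g. width * height

def configure_surface_light(scene: dict, attrs: SurfaceLightAttributes) -> dict:
    """Attach the light emitting attribute information to the scene's surface
    light source before any target virtual object is determined or rendered."""
    scene["surface_light"] = {
        "color": attrs.color,
        "intensity": attrs.intensity,
        "emitting_area": attrs.emitting_area,
    }
    return scene
```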
According to another aspect of the embodiment of the present invention, there is also provided an electronic device for implementing the object rendering method, where the electronic device may be a terminal device or a server shown in fig. 1. The present embodiment takes the electronic device as a terminal device as an example for explanation. As shown in fig. 14, the electronic device comprises a memory 1402 and a processor 1404, the memory 1402 having stored therein a computer program, the processor 1404 being arranged to execute the steps of any of the method embodiments described above by means of the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, determining a target virtual object to be rendered currently in a virtual scene configured with a surface light source;
S2, acquiring the illumination distance between the photosensitive position on the target virtual object and the surface light source;
S3, acquiring target illumination data matched with the illumination distance in an illumination data set matched with the surface light source, wherein the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene;
and S4, rendering the photosensitive position on the target virtual object according to the target illumination data.
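To make steps S2 and S3 concrete, a minimal Python sketch that computes the illumination distance as the distance to the closest point of a rectangular surface light and then picks the nearest entry of a distance-indexed illumination data set; the rectangle parameterization and the nearest-neighbour lookup are illustrative assumptions (the data set could equally be indexed by spatial position, as in the earlier sketch).

```python
import numpy as np

def illumination_distance(photosensitive_pos, light_corner, light_u, light_v):
    """Distance from a photosensitive position to the closest point of a
    rectangular surface light spanned by the edge vectors light_u and light_v."""
    p = np.asarray(photosensitive_pos, dtype=float)
    c = np.asarray(light_corner, dtype=float)
    u = np.asarray(light_u, dtype=float)
    v = np.asarray(light_v, dtype=float)
    # Project onto the rectangle and clamp, giving the closest point on the light.
    s = np.clip(np.dot(p - c, u) / np.dot(u, u), 0.0, 1.0)
    t = np.clip(np.dot(p - c, v) / np.dot(v, v), 0.0, 1.0)
    closest = c + s * u + t * v
    return float(np.linalg.norm(p - closest))

def lookup_target_illumination(distance_indexed_set, distance):
    """Return the precomputed illumination data whose sampled illumination
    distance is closest to the queried distance (nearest-neighbour lookup)."""
    key = min(distance_indexed_set, key=lambda d: abs(d - distance))
    return distance_indexed_set[key]
```

Step S4 would then hand the looked-up color value and transparency to a routine such as the `adjust_and_render` sketch shown earlier.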
Alternatively, it can be understood by those skilled in the art that the structure shown in Fig. 14 is only illustrative, and the electronic device may also be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 14 does not limit the structure of the above electronic device. For example, the electronic device may further include more or fewer components (e.g., a network interface) than those shown in Fig. 14, or have a configuration different from that shown in Fig. 14.
The memory 1402 may be configured to store software programs and modules, such as program instructions/modules corresponding to the object rendering method and apparatus in the embodiments of the present invention. The processor 1404 executes various functional applications and data processing by running the software programs and modules stored in the memory 1402, that is, it implements the object rendering method. The memory 1402 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memories, or other non-volatile solid-state memories. In some examples, the memory 1402 may further include memories located remotely from the processor 1404, and these remote memories may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the internet, an intranet, a local area network, a mobile communication network, and combinations thereof. The memory 1402 may specifically be used to store, but is not limited to, information such as the illumination data corresponding to different spatial positions. As an example, as shown in Fig. 14, the memory 1402 may include, but is not limited to, the first determining unit 1302, the first obtaining unit 1304, the second obtaining unit 1306, and the rendering unit 1308 of the object rendering apparatus. In addition, other module units of the object rendering apparatus may also be included, which are not described in detail in this example.
Optionally, the transmission device 1406 is used for receiving or sending data via a network. Specific examples of the network may include a wired network and a wireless network. In one example, the transmission device 1406 includes a Network Interface Controller (NIC), which may be connected to other network devices and a router via a network cable so as to communicate with the internet or a local area network. In another example, the transmission device 1406 is a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In addition, the electronic device further includes: a display 1408 for displaying the virtual scene and presenting an illumination effect of the target virtual object illuminated by the surface light source; and a connection bus 1410 for connecting the respective module parts in the above-described electronic apparatus.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting a plurality of nodes through network communication. The nodes may form a Peer-To-Peer (P2P) network, and any type of computing device, such as a server, a terminal, or another electronic device, may become a node in the blockchain system by joining the Peer-To-Peer network.
According to an aspect of the present application, a computer program product or a computer program is provided, which comprises computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions, so that the computer device executes the object rendering method. The computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
S1, determining a target virtual object to be rendered currently in a virtual scene configured with a surface light source;
S2, acquiring the illumination distance between the photosensitive position on the target virtual object and the surface light source;
S3, acquiring target illumination data matched with the illumination distance in an illumination data set matched with the surface light source, wherein the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene;
and S4, rendering the photosensitive position on the target virtual object according to the target illumination data.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing the relevant hardware of the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that a person skilled in the art may further make several improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also fall within the protection scope of the present invention.

Claims (10)

1. An object rendering method, comprising:
determining a target virtual object to be rendered currently in a virtual scene configured with a surface light source;
acquiring an illumination distance between a photosensitive position on the target virtual object and the surface light source;
acquiring target illumination data matched with the illumination distance in an illumination data set matched with the surface light source, wherein the illumination data set comprises illumination data corresponding to different spatial positions in the virtual scene;
rendering the photosensitive position on the target virtual object according to the target illumination data.
2. The method of claim 1, wherein determining a target virtual object to be currently rendered in a virtual scene configured with a surface light source comprises:
when the light emitted by the surface light source is projected to the virtual scene, the virtual object hit by the light is determined as the target virtual object, and the surface area of the light projected on the target virtual object is determined as a photosensitive area, wherein the photosensitive area comprises the photosensitive position.
3. The method of claim 2, wherein obtaining an illumination distance between a photosensitive location on the target virtual object and the area light source comprises:
traversing each photosensitive position in the photosensitive area, respectively serving as a current photosensitive position, and executing the following operations:
calculating the distance between the current photosensitive position and the position of a target point in the surface light source, wherein the position of the target point is the position pointed by the shortest connecting line from the current photosensitive position to the surface light source;
determining the distance as the illumination distance.
4. The method of claim 1, wherein rendering the photosensitive location on the target virtual object in accordance with the target lighting data comprises:
extracting an image rendering parameter value currently corresponding to a target space position from the target illumination data, wherein the image rendering parameter value comprises: color value and transparency;
according to the image rendering parameter values, adjusting the rendering parameter values of all pixels on the photosensitive position corresponding to the target space position to obtain adjusted rendering parameter values;
and rendering the picture according to the adjusted rendering parameter value.
5. The method of claim 1, wherein before determining a target virtual object to be currently rendered in a virtual scene configured with a surface light source, the method further comprises:
constructing an illumination equation corresponding to each space position on the basis of an illumination result of each point light source on the surface light source irradiating each space position in the virtual scene;
performing linear transformation solving calculation on the illumination equation to obtain illumination data corresponding to the spatial position;
and constructing the illumination data set based on the illumination data corresponding to each spatial position.
6. The method of claim 5, wherein constructing the illumination equation corresponding to each spatial position based on the illumination result of each point light source on the surface light source illuminating each spatial position in the virtual scene comprises:
sequentially taking each spatial position as a current coloring point position, and determining a current illumination equation corresponding to the current coloring point position;
traversing each point light source in the surface light source, and taking each point light source as a current point light source in sequence;
under the condition that the current point light source is not the last point light source in the surface light source, determining a current light vector of the current point light source irradiated on the current coloring point position and a current normal vector corresponding to the current coloring point position;
calculating a current included angle between the current light ray vector and the current normal vector;
acquiring a linear transformation matrix packaged in a shader, wherein element parameter values in the linear transformation matrix are parameter values whose error values, obtained through multiple rounds of training, fall within a target interval;
constructing a current object illumination equation of the current point light source irradiating the current coloring point position based on the current light vector, the linear transformation matrix and the cosine value of the current included angle;
and performing integral calculation on the object illumination equation corresponding to each point light source to construct the current illumination equation.
7. The method according to any one of claims 1 to 6, wherein before determining a target virtual object to be currently rendered in a virtual scene configured with a surface light source, the method further comprises:
setting light emission attribute information of the surface light source in the virtual scene, wherein the light emission attribute information comprises at least one of the following: the color of the surface light source, the illumination intensity of the surface light source and the light emitting area of the surface light source.
8. An object rendering apparatus, comprising:
a first determining unit, used for determining a target virtual object to be rendered currently in a virtual scene configured with a surface light source;
a first acquisition unit configured to acquire an illumination distance between a photosensitive position on the target virtual object and the surface light source;
a second obtaining unit, configured to obtain, in an illumination data set matched with the area light source, target illumination data matched with the illumination distance, where the illumination data set includes illumination data corresponding to different spatial positions in the virtual scene;
and the rendering unit is used for rendering the photosensitive position on the target virtual object according to the target illumination data.
9. A computer-readable storage medium, comprising a stored program, wherein the program when executed performs the method of any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 7 by means of the computer program.
CN202110963740.5A 2021-08-20 2021-08-20 Object rendering method and device, storage medium and electronic equipment Active CN113648652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110963740.5A CN113648652B (en) 2021-08-20 2021-08-20 Object rendering method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110963740.5A CN113648652B (en) 2021-08-20 2021-08-20 Object rendering method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113648652A true CN113648652A (en) 2021-11-16
CN113648652B CN113648652B (en) 2023-11-14

Family

ID=78491984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110963740.5A Active CN113648652B (en) 2021-08-20 2021-08-20 Object rendering method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113648652B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102346918A (en) * 2011-09-30 2012-02-08 长春理工大学 Method for drawing three-dimensional animation scene only containing object change
CN102819860A (en) * 2012-08-16 2012-12-12 北京航空航天大学 Real-time global illumination method for sub-surface scattering object on the basis of radiosity
CN103606182A (en) * 2013-11-19 2014-02-26 华为技术有限公司 Method and device for image rendering
US20170018256A1 (en) * 2015-07-17 2017-01-19 Abl Ip Holding Llc Systems and methods to provide configuration data to a software configurable lighting device
CN106056658A (en) * 2016-05-23 2016-10-26 珠海金山网络游戏科技有限公司 Virtual object rendering method and virtual object rendering device
CN107452048A (en) * 2016-05-30 2017-12-08 网易(杭州)网络有限公司 The computational methods and device of global illumination
CN109364481A (en) * 2018-10-30 2019-02-22 网易(杭州)网络有限公司 Real-time global illumination method, apparatus, medium and electronic equipment in game
CN109364486A (en) * 2018-10-30 2019-02-22 网易(杭州)网络有限公司 The method and device of HDR rendering, electronic equipment, storage medium in game
CN111632378A (en) * 2020-06-08 2020-09-08 网易(杭州)网络有限公司 Illumination map making method, game model rendering method, illumination map making device, game model rendering device and electronic equipment
CN112755535A (en) * 2021-02-05 2021-05-07 腾讯科技(深圳)有限公司 Illumination rendering method and device, storage medium and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUO Jie, PAN Jingui: "Research on Real-Time Rendering under Complex Area Light Sources", Journal of System Simulation, vol. 24, no. 1, pages 6-11 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023087911A1 (en) * 2021-11-19 2023-05-25 腾讯科技(深圳)有限公司 Data processing method and device and readable storage medium
CN115393499A (en) * 2022-08-11 2022-11-25 广州极点三维信息科技有限公司 3D real-time rendering method, system and medium

Also Published As

Publication number Publication date
CN113648652B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
CN108564646B (en) Object rendering method and device, storage medium and electronic device
US11024077B2 (en) Global illumination calculation method and apparatus
WO2022116759A1 (en) Image rendering method and apparatus, and computer device and storage medium
CN112215934B (en) Game model rendering method and device, storage medium and electronic device
CN112116692A (en) Model rendering method, device and equipment
JP5839907B2 (en) Image processing apparatus and image processing method
CN113648652B (en) Object rendering method and device, storage medium and electronic equipment
CN101923462A (en) FlashVR-based three-dimensional mini-scene network publishing engine
US9183654B2 (en) Live editing and integrated control of image-based lighting of 3D models
KR20240001021A (en) Image rendering method and apparatus, electronic device, and storage medium
CN113674389A (en) Scene rendering method and device, electronic equipment and storage medium
Ganovelli et al. Introduction to computer graphics: A practical learning approach
US20230368459A1 (en) Systems and methods for rendering virtual objects using editable light-source parameter estimation
US9704290B2 (en) Deep image identifiers
Toisoul et al. Accessible GLSL Shader Programming.
CN105976423B (en) A kind of generation method and device of Lens Flare
CN113648655B (en) Virtual model rendering method and device, storage medium and electronic equipment
CN111949904A (en) Data processing method and device based on browser and terminal
US9626791B2 (en) Method for representing a participating media in a scene and corresponding device
WO2018182938A1 (en) Method and system for wireless ultra-low footprint body scanning
CN116543094A (en) Model rendering method, device, computer readable storage medium and electronic equipment
JP2003168130A (en) System for previewing photorealistic rendering of synthetic scene in real-time
Feinstein HLSL Development Cookbook
Dirksen Learn Three. js: Program 3D animations and visualizations for the web with JavaScript and WebGL
CN113350786A (en) Skin rendering method and device for virtual character and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40054541

Country of ref document: HK

GR01 Patent grant