CN115120970A - Baking method, baking device, baking equipment and storage medium of virtual scene - Google Patents


Info

Publication number
CN115120970A
Authority
CN
China
Prior art keywords
illumination
probe
illumination probe
probes
position point
Prior art date
Legal status
Pending
Application number
CN202210470156.0A
Other languages
Chinese (zh)
Inventor
周际翔
金小刚
陈彦臻
李元亨
寇启龙
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210470156.0A
Publication of CN115120970A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/10: Geometric effects
    • G06T15/20: Perspective computation
    • G06T15/205: Image-based rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/50: Lighting effects
    • G06T15/506: Illumination models
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The application discloses a virtual scene baking method, apparatus, device, and storage medium, belonging to the field of computer technologies. When a virtual scene is baked, connectivity vectors of a plurality of illumination probes are determined based on the connectivity between the plurality of illumination probes. A connectivity vector set of a position point is determined based on the connectivity between the position point in the virtual scene and the illumination probes adjacent to it. The virtual scene is then baked based on the connectivity vector sets of the position points, so that each position point is bound to its connectivity vector set in advance. Because the connectivity vector set of a position point reflects the connectivity between the position point and its adjacent illumination probes, more accurate illumination information can be obtained during subsequent rendering based on the connectivity vector set, thereby improving the realism of the illumination.

Description

Baking method, baking device, baking equipment and storage medium of virtual scene
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for baking a virtual scene.
Background
With the development of multimedia technology, games have grown in both variety and functionality. To give players a more realistic gaming experience, technicians aim to improve the fidelity of the game picture, for example by improving the realism of the illumination in the game picture.
In the related art, the illumination effect in a game picture is often processed using Light Probes: a plurality of illumination probes are arranged in the game scene, and when a certain position point in the game scene is baked, the illumination probes adjacent to that position point are recorded. During subsequent rendering, the recorded illumination information of those illumination probes is directly interpolated to obtain the illumination information of the position point.
However, in some cases, two illumination probes adjacent to a certain position point may be located in different virtual spaces, for example one outdoors and the other indoors, with the indoor and outdoor spaces not connected. The illumination information obtained by directly interpolating the illumination information of the two probes is then not accurate enough, resulting in low illumination realism at the position point.
Disclosure of Invention
The embodiments of the present application provide a virtual scene baking method, apparatus, device, and storage medium, which can improve the baking effect of a virtual scene. The technical solution is as follows:
in one aspect, a method for baking a virtual scene is provided, the method including:
determining connectivity vectors of a plurality of illumination probes based on the connectivity between the plurality of illumination probes in a virtual scene;
determining a connectivity vector set of a position point based on the connectivity between the position point in the virtual scene and the illumination probes adjacent to the position point, wherein the connectivity vector set includes the connectivity vectors of the illumination probes that are adjacent to and connected with the position point;
and baking the virtual scene based on the connectivity vector sets of the position points.
In one aspect, a baking apparatus for a virtual scene is provided, the apparatus comprising:
a connectivity vector determining module, configured to determine connectivity vectors of a plurality of illumination probes based on the connectivity between the plurality of illumination probes in a virtual scene;
a connectivity vector set determining module, configured to determine a connectivity vector set of a position point based on the connectivity between the position point in the virtual scene and the illumination probes adjacent to the position point, wherein the connectivity vector set includes the connectivity vectors of the illumination probes that are adjacent to and connected with the position point;
and a baking module, configured to bake the virtual scene based on the connectivity vector sets of the position points.
In a possible embodiment, the connectivity vector determining module is configured to divide the plurality of illumination probes into a plurality of illumination probe sets based on the connectivity between the plurality of illumination probes in the virtual scene, wherein the illumination probes in each illumination probe set are connected with each other; and to determine the connectivity vectors of the illumination probes in the plurality of illumination probe sets, wherein the connectivity vectors of illumination probes in the same illumination probe set are the same, and the connectivity vectors of illumination probes in different illumination probe sets are orthogonal to each other.
In a possible implementation, the connectivity vector determining module is configured to perform any one of:
for a first illumination probe and a second illumination probe among the plurality of illumination probes, dividing the first illumination probe and the second illumination probe into the same illumination probe set when the line segment between them is not occluded by any virtual object;
and dividing the first illumination probe and the second illumination probe into different illumination probe sets when the line segment between them is occluded by any virtual object.
In a possible implementation, the connectivity vector determining module is configured to perform any one of:
for a first illumination probe and a second illumination probe among the plurality of illumination probes, dividing the first illumination probe and the second illumination probe into the same illumination probe set when the line segment between them is not occluded by any virtual object or is occluded only by first-type virtual objects, wherein a first-type virtual object is a transparent virtual object in the virtual scene;
and dividing the first illumination probe and the second illumination probe into different illumination probe sets when the line segment between them is occluded by any second-type virtual object, wherein a second-type virtual object is an opaque virtual object in the virtual scene.
In a possible embodiment, the connectivity vector determining module is configured to assign initial connectivity vectors to the illumination probes in the plurality of illumination probe sets based on the relative positional relationships between the sets, wherein the illumination probes in each set have the same initial connectivity vector and the illumination probes in any two adjacent, unconnected sets have different initial connectivity vectors; and to optimize the initial connectivity vectors of the illumination probes in the plurality of illumination probe sets by simulated annealing to obtain the connectivity vectors of the illumination probes in the plurality of illumination probe sets.
In a possible implementation, the connectivity vector set determining module is configured to perform any one of:
adding the connectivity vector of an illumination probe adjacent to the position point to the connectivity vector set of the position point when the position point is connected with that illumination probe;
and not adding the connectivity vector of an illumination probe adjacent to the position point to the connectivity vector set of the position point when the position point is not connected with that illumination probe.
In a possible embodiment, the apparatus further comprises:
a connectivity determining module, configured to perform ray detection from the position point toward the illumination probes adjacent to it and determine the connectivity between the position point and the adjacent illumination probes.
In a possible implementation, the connectivity determining module is configured to emit a ray toward an illumination probe adjacent to the position point, with the position point as the starting point; to determine that the position point is not connected with the adjacent illumination probe when the ray contacts any virtual object; and to determine that the position point is connected with the adjacent illumination probe when the ray does not contact any virtual object.
In a possible implementation, the connectivity determining module is configured to emit a ray toward an illumination probe adjacent to the position point, with the position point as the starting point; to determine that the position point is connected with the adjacent illumination probe when the ray does not contact any virtual object or contacts only first-type virtual objects, wherein a first-type virtual object is a transparent virtual object in the virtual scene; and to determine that the position point is not connected with the adjacent illumination probe when the ray contacts any second-type virtual object, wherein a second-type virtual object is an opaque virtual object in the virtual scene.
In a possible embodiment, the baking module is configured to, for a static virtual object in the virtual scene, bake the static virtual object based on the connectivity vector sets of a plurality of position points on the static virtual object and the illumination map of the static virtual object; and to bake a dynamic virtual object in the virtual scene based on the connectivity vector sets of a plurality of position points on the dynamic virtual object.
In a possible implementation, there are a plurality of position points, and the apparatus further includes:
a position point merging module, configured to merge identical connectivity vector sets among the connectivity vector sets of the plurality of position points to obtain merged connectivity vector sets, wherein each merged connectivity vector set corresponds to at least one of the plurality of position points.
In a possible embodiment, the apparatus further comprises:
a rendering module, configured to render the position point based on the connectivity vector set of the position point and the textures of the illumination probes.
In a possible implementation, the rendering module is configured to determine, based on the connectivity vector set of the position point, at least one target illumination probe connected with the position point from among the illumination probes adjacent to the position point; to sample the texture of the at least one target illumination probe at the position point to obtain the illumination information of the at least one target illumination probe at the position point; and to fuse the illumination information of the at least one target illumination probe at the position point to obtain the target illumination information of the position point.
In a possible implementation, the rendering module is configured to perform any one of:
adding the illumination information of the at least one target illumination probe at the position point to obtain the target illumination information of the position point;
and performing a weighted summation of the illumination information of the at least one target illumination probe at the position point based on the illumination weights between the at least one target illumination probe and the position point to obtain the target illumination information of the position point, wherein an illumination weight is negatively correlated with the distance between the target illumination probe and the position point.
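As an illustration of the weighted summation just described, the following is a minimal sketch that fuses probe illumination with inverse-distance weights. The Probe structure, the field names, and the specific weight function are assumptions made for this example, not details from the patent; an inverse-distance weight is simply one choice that satisfies the stated requirement that the weight be negatively correlated with distance.

```python
# Illustrative sketch of the weighted fusion; all names are hypothetical.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Probe:
    position: Tuple[float, float, float]      # probe position in the scene
    illumination: Tuple[float, float, float]  # RGB illumination sampled at the point

def _distance(a, b):
    return sum((pa - pb) ** 2 for pa, pb in zip(a, b)) ** 0.5

def fuse_illumination(point, target_probes):
    """Weighted sum of the target probes' illumination at the position point."""
    weights = [1.0 / (1e-6 + _distance(point, p.position)) for p in target_probes]
    total = sum(weights)
    return tuple(sum((w / total) * p.illumination[c]
                     for w, p in zip(weights, target_probes))
                 for c in range(3))

# Example: a nearby red probe dominates a distant blue one.
probes = [Probe((1, 0, 0), (1.0, 0.0, 0.0)), Probe((9, 0, 0), (0.0, 0.0, 1.0))]
print(fuse_illumination((0, 0, 0), probes))  # roughly (0.9, 0.0, 0.1)
```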
In one aspect, a computer device is provided, comprising one or more processors and one or more memories, wherein at least one computer program is stored in the one or more memories, and the at least one computer program is loaded and executed by the one or more processors to implement the above baking method of the virtual scene.
In one aspect, a computer-readable storage medium is provided, in which at least one computer program is stored, and the at least one computer program is loaded and executed by a processor to implement the above baking method of the virtual scene.
In one aspect, a computer program product or a computer program is provided, comprising program code stored in a computer-readable storage medium; a processor of a computer device reads the program code from the computer-readable storage medium and executes it, so that the computer device performs the above baking method of the virtual scene.
According to the technical solution provided by the embodiments of the present application, when the virtual scene is baked, the connectivity vectors of the plurality of illumination probes are determined based on the connectivity between the plurality of illumination probes. The connectivity vector set of a position point is determined based on the connectivity between the position point in the virtual scene and its adjacent illumination probes; the connectivity vectors stored in the set are those of the illumination probes connected with the position point, that is, the probes that can actually influence the illumination of the position point. The virtual scene is baked based on the connectivity vector sets of the position points, so that each position point is bound to its connectivity vector set in advance. Because the connectivity vector set of a position point reflects the connectivity between the position point and its adjacent illumination probes, more accurate illumination information can be obtained during subsequent rendering based on the connectivity vector set, thereby improving the realism of the illumination.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a virtual scene baking method according to an embodiment of the present application;
Fig. 2 is a flowchart of a virtual scene baking method according to an embodiment of the present application;
Fig. 3 is a flowchart of another virtual scene baking method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the connectivity between an illumination probe and a position point according to an embodiment of the present application;
Fig. 5 is a comparison diagram of rendering effects according to an embodiment of the present application;
Fig. 6 is a comparison diagram of rendering effects according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a virtual scene baking apparatus according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, the following detailed description of the embodiments of the present application will be made with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used for distinguishing between similar items and items that have substantially the same function or similar functionality, and it should be understood that "first," "second," and "nth" do not have any logical or temporal dependency or limitation on the number or order of execution.
It should be noted that information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this application are authorized by the user or sufficiently authorized by various parties, and the collection, use, and processing of the relevant data is required to comply with relevant laws and regulations and standards in relevant countries and regions.
Unreal Engine 4: a game development engine provided by Epic Games. Compared with other engines, Unreal Engine is efficient and comprehensive, and its ability to directly preview the development effect gives developers stronger capabilities.
Virtual scene: a scene displayed (or provided) by an application program when the application runs on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated, semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be two-dimensional, 2.5-dimensional, or three-dimensional, and its dimensionality is not limited in the embodiments of the present application. For example, a virtual scene may include sky, land, and ocean, the land may include environmental elements such as deserts and cities, and the user may control a virtual character to move in the virtual scene.
Virtual character: an object controlled by the user in the virtual scene.
Virtual object: a moving or static object in the virtual scene. Moving virtual objects include animals, vehicles, characters, and the like in the virtual scene; static virtual objects include walls, rocks, and the ground in the virtual scene.
Illumination probe (Light Probe): captures and uses information about light rays passing through the empty space of a scene. Similar to an illumination map, an illumination probe stores "baked" information about the lighting in the scene. The difference is that an illumination map stores information about light striking surfaces in the scene, while an illumination probe stores information about light passing through empty space in the scene.
Illumination map (lightmap): used to blend an illumination texture on top of the original texture rendering of an object model in the virtual scene, so that the object model is rendered with a lighting effect. Illumination mapping is a technique that enhances the lighting effect of static scenes.
Greedy Algorithm: under a given criterion, the sample that best meets the criterion is considered first and the samples that do not meet it are considered last, finally producing an answer. In other words, when solving a problem, the algorithm always makes the choice that looks best at the moment; it settles for a local optimum in some sense rather than deriving the global optimum from overall considerations.
Simulated annealing: Simulated Annealing (SA) is a probability-based algorithm derived from the principle of solid annealing. Its idea is as follows: starting from a high initial temperature, the temperature is gradually lowered until it satisfies the thermal equilibrium condition. At each temperature, n rounds of search are performed; in each round a random perturbation is added to the old solution to generate a new solution, and the new solution is accepted according to a certain rule.
Fig. 1 is a schematic diagram of an implementation environment of a baking method for a virtual scene according to an embodiment of the present application, and referring to fig. 1, the implementation environment may include: a first terminal 110, a second terminal 120, a server 130 and a communication network 140.
The first terminal 110 is provided with a first application program that bakes the virtual scene. Illustratively, the first application includes a first graphics engine, and the first graphics engine can be used in the development of the virtual scene. Optionally, the graphics engine is Unity3D, Unreal Engine, Frostbite, or the like, which is not limited here. The first terminal 110 includes various types of terminal devices such as mobile phones, tablet computers, desktop computers, and laptop computers.
The second terminal 120 installs and runs a second application program that supports rendering of the virtual scene. Illustratively, the second application includes a second graphics engine, and the second graphics engine can be used for the running display of the virtual scene. Optionally, the first graphics engine and the second graphics engine may be the same graphics engine, or different application versions of the same graphics engine (e.g., the first graphics engine is a developer version and the second a runtime version), which is not limited here. The second application program may be any game program such as a virtual reality application, a three-dimensional map program, a First-Person Shooting (FPS) game, a Third-Person Shooting (TPS) game, a Multiplayer Online Battle Arena (MOBA) game, a Massively Multiplayer Online Role-Playing Game (MMORPG), or a multiplayer battle survival game. The user controls the master virtual character located in the virtual scene through the second terminal 120. The second terminal 120 includes various types of terminal devices such as mobile phones, tablet computers, desktop computers, and laptop computers.
The server 130 is configured to provide backend services for the first application and/or the second application, such as backend data computing support for the first application and backend application logic support for the second application. Optionally, the server 130 undertakes the primary computing work while the first terminal 110 and the second terminal 120 undertake the secondary computing work; or the server 130 undertakes the secondary computing work while the first terminal 110 and the second terminal 120 undertake the primary computing work; or the server 130, the first terminal 110, and the second terminal 120 cooperate using a distributed computing architecture.
It should be noted that the server 130 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, Content Delivery Network (CDN), big data, and artificial intelligence platforms. In some embodiments, the server 130 may also be implemented as a node in a blockchain system. A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms.
Having described the implementation environment of the embodiments of the present application, the application scenarios are described below with reference to that environment. In the following description, the terminal is the first terminal 110 in the above implementation environment, and the server is the server 130. The technical solution provided by the embodiments of the present application can be applied to game production scenarios and animation production scenarios.
When the technical solution is applied to game production, after the game developers arrange illumination probes in the virtual scene, the terminal executes the technical solution provided by the embodiments of the present application to bake the virtual scene, and the baked virtual scene can then be rendered with a better illumination effect.
It should be noted that the above description takes the terminal executing the technical solution as an example. In other possible implementations, the technical solution can also be executed by the server, that is, the server bakes the virtual scene using the technical solution provided by the embodiments of the present application, which is not limited here.
Having introduced the implementation environment and the application scenarios, the virtual scene baking method provided by the embodiments of the present application is described below with reference to Fig. 2, taking the terminal as the execution subject. The method includes the following steps.
201. The terminal determines connectivity vectors of a plurality of illumination probes based on the connectivity between the plurality of illumination probes in the virtual scene.
An illumination probe (Light Probe) is arranged to detect the light distribution of the virtual light sources in the virtual scene; a virtual light source provides the source of light in the virtual scene. In some embodiments, the virtual light sources include light sources that can form illumination effects such as point lights, parallel (directional) lights, and spotlights. The virtual scene may have one or more corresponding virtual light sources, which is not limited in the embodiments of the present application. In some embodiments, the connectivity vectors of any two unconnected illumination probes among the plurality of illumination probes are orthogonal to each other, and the connectivity vectors of any two connected illumination probes are not orthogonal to each other, so whether two illumination probes are connected can be determined by whether their connectivity vectors are orthogonal.
202. The terminal determines a connectivity vector set of a position point based on the connectivity between the position point in the virtual scene and the illumination probes adjacent to the position point, wherein the connectivity vector set includes the connectivity vectors of the illumination probes that are adjacent to and connected with the position point.
The connectivity between a position point in the virtual scene and an illumination probe is determined in the same manner as the connectivity between two illumination probes. The connectivity vector set of the position point records the connectivity vectors of the illumination probes adjacent to and connected with the position point, so that in the subsequent rendering process the illumination probes adjacent to and connected with the position point can be determined quickly through the connectivity vector set, which is highly efficient. A minimal sketch of such a render-time lookup is given below.
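The following sketch illustrates the lookup under assumed data layouts: each probe carries the connectivity vector assigned in step 201, and each position point was baked with the set of connectivity vectors of the probes it is adjacent to and connected with. A neighbouring probe is selected as a "target" probe when its vector has a non-zero dot product with some vector in the point's baked set. All names here are hypothetical.

```python
# Minimal sketch of the render-time connectivity test; names are assumptions.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def select_target_probes(baked_vector_set, adjacent_probes):
    """adjacent_probes: list of (probe_id, connectivity_vector) pairs."""
    return [probe_id
            for probe_id, vec in adjacent_probes
            if any(dot(vec, v) != 0 for v in baked_vector_set)]

# Example: probes in set A share (1, 0); probes in set B share (0, 1).
baked = [(1, 0)]                                  # the point connects only to set A
neighbours = [("A1", (1, 0)), ("B1", (0, 1))]
assert select_target_probes(baked, neighbours) == ["A1"]
```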
203. The terminal bakes the virtual scene based on the connectivity vector sets of the position points.
Baking the virtual scene is a preprocessing of the virtual scene; the preprocessing reduces the amount of computation in subsequent processes and improves their efficiency.
Through the technical solution provided by the embodiments of the present application, when the virtual scene is baked, the connectivity vectors of the plurality of illumination probes are determined based on the connectivity between the plurality of illumination probes. The connectivity vector set of a position point is determined based on the connectivity between the position point in the virtual scene and its adjacent illumination probes; the connectivity vectors stored in the set are those of the illumination probes connected with the position point, that is, the probes that can actually influence the illumination of the position point. The virtual scene is baked based on the connectivity vector sets of the position points, so that each position point is bound to its connectivity vector set in advance. Because the connectivity vector set of a position point reflects the connectivity between the position point and its adjacent illumination probes, more accurate illumination information can be obtained during subsequent rendering based on the connectivity vector set, thereby improving the realism of the illumination.
Steps 201-203 above are a brief description of the technical solution provided by the embodiments of the present application. The technical solution is described more clearly below with reference to some examples and Fig. 3, taking the terminal as the execution subject. The method includes:
301. The terminal determines a plurality of illumination probes in the virtual scene.
In a possible implementation, the terminal generates a plurality of illumination probes in the virtual scene at target intervals. The illumination probes are used to detect the light distribution of the virtual light sources in the virtual scene, that is, the illumination information; in some embodiments, the illumination probes store the illumination information using spherical harmonics. In some embodiments, the virtual scene is a static virtual scene, and the geometry in the virtual scene does not change.
A plurality of illumination probes means two or more illumination probes.
In this embodiment, the plurality of illumination probes generated by the terminal are uniformly distributed in the virtual scene, which can produce a relatively fine illumination display effect.
In some embodiments, the number of illumination probes in the virtual scene is positively correlated with the number of virtual objects in it: the more virtual objects in the virtual scene, the more illumination probes are generated; the fewer the virtual objects, the fewer the probes. With this embodiment, when the virtual scene contains many virtual objects, the illumination effect on the virtual objects is refined by generating many illumination probes; when it contains few virtual objects, fewer probes are generated, which reduces the computation required for baking and rendering the virtual scene and improves their efficiency. In some embodiments, the terminal divides the virtual scene into a plurality of virtual spaces and generates illumination probes in each virtual space based on the number of virtual objects in it, with the number of probes in each virtual space positively correlated with the number of virtual objects there. Dividing the virtual scene into virtual spaces allows the probes to be generated more precisely according to the number of virtual objects, achieving a better illumination display effect. A sketch of such density-aware placement follows.
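A sketch of the placement ideas above, under stated assumptions: probes are laid out on a uniform grid at a target spacing, and the spacing is halved in a virtual space whose virtual-object count exceeds a threshold. The spacing values and the threshold are illustrative choices, not the patent's.

```python
# Illustrative density-aware probe placement on a uniform grid.
import itertools

def generate_probes(space_min, space_max, object_count,
                    base_spacing=5.0, dense_threshold=10):
    spacing = base_spacing / 2 if object_count > dense_threshold else base_spacing
    axes = []
    for lo, hi in zip(space_min, space_max):
        count = int((hi - lo) / spacing) + 1
        axes.append([lo + i * spacing for i in range(count)])
    return list(itertools.product(*axes))  # uniform grid of probe positions

probes = generate_probes((0, 0, 0), (20, 10, 20), object_count=4)
print(len(probes))  # 5 * 3 * 5 = 75 probe positions at spacing 5.0
```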
In one possible implementation, the terminal determines the shadow points of the virtual objects in the virtual scene, where a shadow point is the target point of the shadow area formed by a virtual object on the side facing away from the virtual light source when the virtual light source in the virtual scene illuminates the virtual object. The terminal generates a plurality of illumination probes in the virtual scene based on the shadow points of the virtual objects in the virtual scene.
In some embodiments, the target point of the shadow area is the midpoint of the shadow area, that is, the geometric center of the shadow area. Alternatively, the target point of the shadow area is a point on the boundary of the shadow area, which is not limited in the embodiments of the present application.
In this embodiment, the terminal can generate the illumination probes according to the shadow points of the virtual objects, so that the number and distribution of the illumination probes match the number and distribution of the virtual objects in the virtual scene, reducing the number of illumination probes while preserving the illumination quality as far as possible.
For example, the terminal determines the bounding box of a virtual object in the virtual scene and emits rays toward the virtual object with a virtual light source in the virtual scene as the starting point; the area enclosed by the intersection points of the rays and the bounding box is the shadow area of the virtual object, and the target point of that shadow area is the shadow point of the virtual object. The terminal generates an illumination probe above the shadow point of the virtual object.
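The following is a hedged reconstruction of the bounding-box example, assuming a directional light and an axis-aligned bounding box (AABB): the corners are projected along the light direction onto a ground plane, and the centroid of the projection is taken as the shadow point. The projection math is an illustration, not code from the patent.

```python
# Illustrative shadow-point estimate for an AABB under a downward light.

def shadow_point(aabb_min, aabb_max, light_dir, ground_y=0.0):
    corners = [(x, y, z)
               for x in (aabb_min[0], aabb_max[0])
               for y in (aabb_min[1], aabb_max[1])
               for z in (aabb_min[2], aabb_max[2])]
    projected = []
    for cx, cy, cz in corners:
        t = (cy - ground_y) / -light_dir[1]   # march along the light to the ground
        projected.append((cx + t * light_dir[0], cz + t * light_dir[2]))
    n = len(projected)
    return (sum(p[0] for p in projected) / n, ground_y,
            sum(p[1] for p in projected) / n)

# An illumination probe would then be generated slightly above this point.
print(shadow_point((0, 0, 0), (1, 2, 1), light_dir=(0.5, -1.0, 0.0)))
```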
It should be noted that the terminal can generate the illumination probes in the virtual scene in any of the above manners, or using other methods, which is not limited in the embodiments of the present application. In some embodiments, the above implementations are carried out by the first graphics engine. In some embodiments, a technician can also manually add illumination probes in the virtual scene through the first graphics engine.
It should be noted that step 301 is optional: the terminal may determine the plurality of illumination probes in the virtual scene by executing step 301, or may directly execute the following step 302 when a plurality of illumination probes have already been generated in the virtual scene, which is not limited in the embodiments of the present application.
302. The terminal determines the connectivity vectors of the plurality of illumination probes based on the connectivity between the plurality of illumination probes in the virtual scene.
The connectivity between illumination probes has two states, connected and unconnected, which are described in the following sub-steps. In some embodiments, the connectivity between two illumination probes is also referred to as the visibility between them; two illumination probes being connected means they are mutually visible, that is, from the position of either probe the other can be seen without occlusion. The connectivity vectors represent the connectivity among the plurality of illumination probes: the connectivity vectors of two unconnected illumination probes are orthogonal to each other, that is, their dot product is 0, while the connectivity vectors of two connected illumination probes are not orthogonal, that is, their dot product is not 0. In subsequent use, whether two illumination probes are connected can be determined by checking whether their connectivity vectors are orthogonal, that is, whether their dot product is 0.
Step 302 includes the following sub-steps 3021-3022 or 3023-3024.
3021. The terminal divides the plurality of illumination probes into a plurality of illumination probe sets based on the connectivity between the plurality of illumination probes in the virtual scene, and the illumination probes in each illumination probe set are connected with each other.
In a possible implementation, for a first illumination probe and a second illumination probe among the plurality of illumination probes, when the line segment between the first illumination probe and the second illumination probe is not occluded by any virtual object, the terminal divides the two probes into the same illumination probe set; when the line segment between them is occluded by any virtual object, the terminal divides them into different illumination probe sets. In some embodiments, the illumination probes in the same illumination probe set are also referred to as strongly connected illumination probes.
The line segment between the first illumination probe and the second illumination probe is a virtual segment that is invisible when the virtual scene is rendered; during baking, a technician can set the segment to be visible or invisible according to actual needs. The segment being occluded by a virtual object means that there is an intersection point between the segment and a virtual object in the virtual scene.
In this embodiment, whether two illumination probes are connected depends on whether the segment between them is occluded by a virtual object in the virtual scene: when the segment is occluded, the two probes are determined to be unconnected; when it is not occluded, the two probes are determined to be connected. By dividing the plurality of illumination probes into illumination probe sets, the probes can subsequently be processed in units of sets, which improves the efficiency of subsequent processing.
For example, the terminal performs ray detection from the first illumination probe toward the second, that is, emits a ray toward the second probe with the first probe as the starting point. When the ray is not occluded by any virtual object, the terminal divides the two probes into the same illumination probe set; when the ray is occluded by any virtual object, the terminal divides them into different sets. The above takes ray detection from the first probe toward the second as an example; in other possible embodiments the terminal can equally perform ray detection from the second probe toward the first, which is not limited in the embodiments of the present application. In some embodiments, when determining whether a ray is occluded by a virtual object, the terminal can rely on a convex polyhedron enclosing the virtual object, where the convex polyhedron does not intersect the enclosed virtual object: when the ray does not contact the convex polyhedron enclosing any virtual object, it is determined that the ray is not occluded; when the ray contacts the convex polyhedron of any virtual object, it is determined that the ray is occluded by that virtual object. In some embodiments, this method is also referred to as convex hull expansion.
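A runnable sketch of this grouping, with two stated assumptions: the engine's occlusion query is stubbed by a segment-versus-sphere test, and connectivity is treated as transitive, with union-find merging probes that see each other directly or through intermediate probes. The names are hypothetical.

```python
# Illustrative probe grouping by line-of-sight; all names are assumptions.

def is_occluded(a, b, obstacles):
    """True if the segment a-b passes through any obstacle sphere (center, radius)."""
    for (cx, cy, cz), r in obstacles:
        dx, dy, dz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
        fx, fy, fz = a[0] - cx, a[1] - cy, a[2] - cz
        seg_len_sq = dx * dx + dy * dy + dz * dz
        t = max(0.0, min(1.0, -(fx * dx + fy * dy + fz * dz) / seg_len_sq))
        px, py, pz = a[0] + t * dx, a[1] + t * dy, a[2] + t * dz
        if (px - cx) ** 2 + (py - cy) ** 2 + (pz - cz) ** 2 <= r * r:
            return True
    return False

def group_probes(probes, obstacles):
    """Divide probe positions into sets of mutually reachable probes."""
    parent = list(range(len(probes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(probes)):
        for j in range(i + 1, len(probes)):
            if not is_occluded(probes[i], probes[j], obstacles):
                parent[find(i)] = find(j)     # union two connected probes
    sets = {}
    for i in range(len(probes)):
        sets.setdefault(find(i), []).append(i)
    return list(sets.values())

# Example: a wall-like sphere separates probe 0 from probe 1.
print(group_probes([(0, 0, 0), (10, 0, 0)], [((5, 0, 0), 1.0)]))  # [[0], [1]]
```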
In a possible implementation, for a first illumination probe and a second illumination probe among the plurality of illumination probes, when the line segment between them is not occluded by any virtual object or is occluded only by first-type virtual objects, the terminal divides the two probes into the same illumination probe set, where a first-type virtual object is a transparent virtual object in the virtual scene. When the line segment between them is occluded by any second-type virtual object, the terminal divides the two probes into different illumination probe sets, where a second-type virtual object is an opaque virtual object in the virtual scene.
In this embodiment, two illumination probes are determined to be connected when the segment between them is not occluded by any virtual object or is occluded only by transparent virtual objects in the virtual scene, and determined to be unconnected when the segment is occluded by an opaque virtual object in the virtual scene.
For example, the terminal performs ray detection from the first illumination probe toward the second, that is, emits a ray toward the second probe with the first probe as the starting point. When the ray is not occluded by any virtual object, the terminal divides the two probes into the same illumination probe set. Alternatively, when the ray is occluded by a virtual object, the terminal determines the type of that virtual object: when it is a first-type virtual object, the terminal divides the two probes into the same set; when it is a second-type virtual object, the terminal divides them into different sets. In some embodiments, when determining whether the ray is occluded, the terminal can rely on a convex polyhedron enclosing each virtual object, where the convex polyhedron does not intersect the enclosed virtual object: when the ray does not contact the convex polyhedron enclosing any virtual object, it is determined that the ray is not occluded; when the ray contacts the convex polyhedron of a virtual object, it is determined that the ray is occluded by that virtual object. Accordingly, when the ray contacts the convex polyhedron of a virtual object, the terminal determines the type of the virtual object enclosed by the convex body, for example by querying the identity of the enclosed virtual object based on the identity of the convex body, and then querying the type of the virtual object, transparent or opaque, based on that identity.
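A short variant of the same occlusion test for the transparent/opaque rule above; the transparency flag and the obstacle format are assumptions for illustration, and the function reuses is_occluded from the previous sketch.

```python
# Only opaque obstacles can break connectivity; transparent ones are ignored.

def is_occluded_by_opaque(a, b, tagged_obstacles):
    """tagged_obstacles: list of ((center, radius), transparent) entries."""
    opaque = [sphere for sphere, transparent in tagged_obstacles if not transparent]
    return is_occluded(a, b, opaque)
```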
In one possible embodiment, the terminal divides the virtual scene into a plurality of virtual spaces. The terminal divides the plurality of illumination probes into a plurality of illumination probe sets based on the connectivity between the illumination probes in each virtual space; the illumination probes in the same set belong to the same virtual space and are connected with each other.
In this embodiment, the terminal divides the virtual scene into a plurality of virtual spaces and groups the illumination probes in units of virtual spaces, which improves the granularity of the grouping. This embodiment can be combined with either of the above embodiments, that is, the connectivity between the illumination probes within each divided virtual space is determined using the methods of the above embodiments.
For example, the terminal divides the virtual scene into a plurality of virtual spaces based on the positions of the illumination probes in the virtual scene, and each virtual space includes at least one illumination probe. For any one of the virtual spaces, the terminal determines the connectivity between the illumination probes in that space, that is, performs ray detection based on the illumination probes in the space; the method of determining the connectivity is as described in the previous two embodiments and is not repeated here. The terminal divides the mutually connected illumination probes in a virtual space into one illumination probe set, and the probes in different sets within the same virtual space are not connected with each other. When one of the virtual spaces contains only a single illumination probe, the terminal places that probe in an illumination probe set of its own; alternatively, the terminal merges such a virtual space with an adjacent virtual space, thereby reducing the number of virtual spaces.
Through step 3021, the terminal reflects the connectivity between the illumination probes by the division into illumination probe sets: the probes in the same set are connected with each other, and the probes in different sets are not connected with each other.
3022. The terminal determines the connectivity vectors of the illumination probes in the plurality of illumination probe sets; the connectivity vectors of illumination probes in the same set are the same, and the connectivity vectors of illumination probes in different sets are orthogonal to each other.
In one possible embodiment, the terminal assigns initial connectivity vectors to the illumination probes in the plurality of illumination probe sets based on the relative positional relationships between the sets: the illumination probes in each set have the same initial connectivity vector, and the illumination probes in any two adjacent, unconnected sets have different initial connectivity vectors. The terminal then optimizes the initial connectivity vectors of the illumination probes in the plurality of sets by simulated annealing to obtain their connectivity vectors.
The relative positional relationship between illumination probe sets covers whether the sets are adjacent in the virtual scene, and the purpose of assigning initial connectivity vectors based on this relationship is to give the illumination probes in adjacent sets different initial connectivity vectors as far as possible. Accordingly, the initial connectivity vectors of the sets reflect the relative positional relationships between them. The position of an illumination probe set is determined by the positions of the illumination probes in it; for example, the terminal takes the average position of the probes in the set as the position of the set, the average position being the geometric center of the geometric body enclosed by the probes in the set.
In this embodiment, the terminal assigns initial connectivity vectors to the illumination probe sets according to their relative positional relationships and further optimizes the initial connectivity vectors by simulated annealing, so that the resulting connectivity vectors of the illumination probes in the plurality of sets reflect the connectivity between the sets more accurately.
In some embodiments, the process of assigning initial connectivity vectors to the illumination probes in the plurality of illumination probe sets based on their relative positional relationships can be regarded as assigning colors to the nodes of a graph containing a plurality of nodes. Each illumination probe set can be regarded as a node; because the probes in a set share the same initial connectivity vector, the set can stand in for its probes. The relative positional relationships between the sets are embodied by the edges between nodes: an edge exists between adjacent nodes and no edge exists between non-adjacent nodes. The problem is thus converted into assigning color vectors to the nodes of the graph, where the initial connectivity vector corresponds to the initial color vector and the connectivity vector corresponds to the color vector.
For example, the terminal builds a first graph based on the relative positional relationships between the illumination probe sets; each node corresponds to one illumination probe set, and an edge between two nodes indicates that the two corresponding sets are adjacent. The terminal assigns initial color vectors to the nodes of the first graph with a greedy algorithm; the initial color vectors represent the colors of the nodes and correspond to the initial connectivity vectors. The goal of the greedy step is to ensure that the color vectors of adjacent nodes differ as far as possible, while non-adjacent nodes may share a color vector. The greedy algorithm proceeds stepwise: in each round of processing the terminal assigns distinct initial color vectors to a node and its neighbors, so a locally optimal result is obtained on the graph. The terminal then optimizes the initial color vectors of the nodes by simulated annealing, based on the edges between nodes, to obtain the color vectors of the nodes. The purpose of this optimization is to reduce the number of distinct color vectors as much as possible while still reflecting the adjacency between nodes: provided the color vectors of adjacent nodes remain different, non-adjacent nodes may share the same vector, which reduces the number of color vectors and the storage they occupy. Through the above embodiment, the terminal obtains the connectivity vectors (color vectors) of the illumination probes in the plurality of illumination probe sets. When optimizing the initial color vectors by simulated annealing based on the edges between the nodes, the terminal can proceed based on the following formula (1).
$$E(i,j)=\sum_{x} w_x^{i}\, w_x^{j}\, \left| V_x^{i} - V_x^{j} \right| \tag{1}$$

where $i$ and $j$ represent two illumination probes in the same illumination probe group, and $x$ is a position point in the virtual scene. $V_x^{i}$ is the visibility of the illumination probe $i$ from the position point $x$, that is, the communication relationship between the position point $x$ and the illumination probe $i$; $V_x^{j}$ is the visibility of the illumination probe $j$ from the position point $x$, that is, the communication relationship between the position point $x$ and the illumination probe $j$; the value ranges of $V_x^{i}$ and $V_x^{j}$ are both 0 to 1. $w_x^{i}$ is the interpolation coefficient of the position point $x$ for the illumination probe $i$, and $w_x^{j}$ is the interpolation coefficient of the position point $x$ for the illumination probe $j$. The interpolation coefficient is negatively correlated with the distance between the position point and the illumination probe: the farther the position point is from the illumination probe, the smaller the interpolation coefficient. In some embodiments, the interpolation coefficients are obtained by trilinear interpolation of the coordinate differences.
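To make the role of formula (1) concrete, the following sketch (in Python, purely illustrative) computes trilinear interpolation coefficients from coordinate differences and accumulates the interpolation-weighted visibility disagreement of a probe pair over sampled position points. Since the original formula image is reconstructed above from its stated definitions, both the exact form and the helper names here are assumptions rather than the embodiment's literal implementation.

```python
def trilinear_weights(p, cell_min, cell_size):
    """Interpolation coefficients of position point p for the 8 probes at the
    corners of its grid cell, computed from coordinate differences; the
    weight of a corner probe shrinks as p moves away from that corner."""
    f = [(p[k] - cell_min[k]) / cell_size for k in range(3)]
    weights = {}
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                weights[(dx, dy, dz)] = (
                    (f[0] if dx else 1 - f[0])
                    * (f[1] if dy else 1 - f[1])
                    * (f[2] if dz else 1 - f[2])
                )
    return weights

def pair_energy(sample_points, vis_i, vis_j, w_i, w_j):
    """Accumulated interpolation-weighted visibility disagreement between
    probes i and j, in the spirit of formula (1): the value is large when
    some point sees one probe but not the other, i.e. treating the two
    probes interchangeably there would risk light leakage."""
    return sum(w_i[x] * w_j[x] * abs(vis_i[x] - vis_j[x]) for x in sample_points)
```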
In some embodiments, the connectivity vector of an illumination probe set has a plurality of components, and each component represents the connectivity between that illumination probe set and the other illumination probe sets. The connectivity vector of an illumination probe set refers to the connectivity vector shared by all illumination probes in that set. In some embodiments, the dimension of the connectivity vector of an illumination probe set equals the number of distinct connectivity vectors in use (different illumination probe sets may share the same connectivity vector). For example, when 8 distinct connectivity vectors are in use, the connectivity vector of each illumination probe set has 8 dimensions, each component represents the connectivity between the illumination probe set and other illumination probe sets, and in some embodiments each component takes the value 0 or 1.
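For illustration, one simple realization consistent with this description is a one-hot encoding of the assigned color index, sketched below; the encoding itself is an assumption, since the embodiment only specifies 0/1 components and orthogonality across different connectivity vectors.

```python
import numpy as np

PALETTE_SIZE = 8  # number of distinct connectivity vectors in use

def one_hot_vector(color_index):
    """Connectivity vector as a one-hot encoding of the color index:
    different colors give orthogonal vectors, and probe sets assigned
    the same color share the identical vector."""
    v = np.zeros(PALETTE_SIZE, dtype=np.uint8)
    v[color_index] = 1
    return v

a, b = one_hot_vector(2), one_hot_vector(5)
print(int(a @ b), int(a @ a))  # 0 1: orthogonal across colors, matching within
```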
In some embodiments, the terminal can store the textures by taking the illumination probe sets as units, that is, the illumination information of the illumination probes in the same illumination probe set is stored in the same texture, and the illumination information of the illumination probes in different illumination probe sets is stored in different textures, so that subsequent calling is facilitated. In some embodiments, in the event that any one of the illumination probes does not have illumination information, pure black is used in place of the illumination information of that illumination probe.
Alternatively, the step 302 can also be realized through the following sub-steps 3023-3024.
3023. And the terminal allocates an initial communication relation vector to the plurality of illumination probes.
In a possible implementation manner, the terminal allocates initial communication relation vectors to the plurality of illumination probes according to the distance between any two of the illumination probes, where any two adjacent illumination probes have the same initial communication relation vector. Here, adjacent refers to illumination probes whose distance meets a target distance condition; for example, two illumination probes whose distance is less than or equal to a distance threshold are adjacent illumination probes, and the distance threshold is set by a technician according to the actual situation, which is not limited in this application. In step 3023, if the initial communication relation vectors of two illumination probes are the same, this indicates only that the two illumination probes are adjacent; it does not indicate whether the two illumination probes are communicated.
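An illustrative sketch of this distance-based assignment follows; it merges chains of adjacent probes with a union-find so that any two adjacent probes share an initial vector id (connectivity is not yet considered at this stage). The function and threshold names are assumptions.

```python
import math

def initial_vector_ids(positions, threshold):
    """Probes closer than the threshold are adjacent and must share an
    initial connectivity-vector id; a union-find merges chains of
    adjacent probes into a single id."""
    parent = list(range(len(positions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) <= threshold:
                parent[find(i)] = find(j)  # adjacent: share an id
    return [find(i) for i in range(len(positions))]

probes = [(0, 0, 0), (1, 0, 0), (5, 0, 0)]
print(initial_vector_ids(probes, threshold=1.5))  # [1, 1, 2]: first two share an id
```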
In this embodiment, the terminal can rapidly allocate initial communication relation vectors to the plurality of illumination probes according to the distances between the illumination probes, which improves the efficiency of allocating the initial communication relation vectors.
3024. And the terminal optimizes the initial communication relation vectors of the plurality of illumination probes based on the communication relation among the plurality of illumination probes to obtain the communication relation vectors of the plurality of illumination probes.
The purpose of optimizing the initial communication relation vectors of the plurality of illumination probes is to enable the obtained communication relation vectors of the illumination probes to reflect the communication relation among the illumination probes.
In one possible embodiment, the process of assigning initial communication relation vectors to the plurality of illumination probes based on the communication relation among them can likewise be regarded as a process of assigning colors to the nodes of a graph (Graph), where the graph includes a plurality of nodes. Each illumination probe can be regarded as one node, and the communication relation among the illumination probes is embodied by the connecting lines between the nodes: a connecting line exists between communicated nodes, and no connecting line exists between non-communicated nodes. The assignment can thus be converted into the problem of assigning color vectors to the nodes of the graph, where the initial connected vector corresponds to the initial color vector and the color vector corresponds to the communication relation vector.
In one possible implementation, the terminal generates a second Graph (Graph) based on the connectivity relationship among the plurality of illumination probes, where the second Graph includes a plurality of nodes, and each node corresponds to one illumination probe. In the second graph, a connecting line exists between two nodes, which indicates that the two nodes are communicated; and no connecting line exists between the two nodes, which means that the two nodes are not connected. And the terminal optimizes the initial color vectors corresponding to the nodes based on the connecting lines between the nodes in the second graph to obtain the color vectors of the nodes, namely the color vectors of the illumination probes.
For example, the terminal generates a second graph based on the communication relation among the plurality of illumination probes; each node of the second graph corresponds to one illumination probe, a connecting line between two nodes indicates that the two corresponding illumination probes are communicated, and the initial communication relation vector of the illumination probe corresponding to a node is also the initial color vector of that node. The terminal uses a simulated annealing method to optimize the initial color vectors of the nodes based on the connecting lines between the nodes, obtaining the color vectors of the plurality of nodes. The purpose of optimizing the initial color vectors with the simulated annealing method is as follows: while ensuring that the color vectors still reflect the communication relation between the nodes, reduce the number of distinct color vectors as much as possible; that is, on the premise that non-communicated nodes have different color vectors, communicated nodes may share the same color vector, which reduces the number of color vectors and thus the occupied storage space. Through the above embodiment, the terminal can obtain the communication relation vectors (color vectors) of the plurality of illumination probes.
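The greedy-coloring-plus-annealing pipeline used for both the first graph (over illumination probe sets) and the second graph (over individual illumination probes) can be sketched as follows. The graph is represented as adjacency lists over "conflict" edges, i.e., pairs of nodes that must end up with different colors; all names are illustrative assumptions.

```python
import math
import random

def greedy_coloring(conflicts):
    """Greedy step: give each node the smallest color index not already
    used by a conflicting neighbor, so conflicting nodes differ locally."""
    colors = {}
    for node in conflicts:
        used = {colors[n] for n in conflicts[node] if n in colors}
        c = 0
        while c in used:
            c += 1
        colors[node] = c
    return colors

def anneal(conflicts, colors, steps=20000, temp=1.0, cooling=0.9995):
    """Simulated annealing step: randomly recolor nodes while always
    respecting the conflict constraint, preferring states that use fewer
    distinct colors so fewer vectors (and textures) must be stored."""
    nodes = list(conflicts)
    for _ in range(steps):
        node = random.choice(nodes)
        proposal = random.randrange(len(nodes))
        if any(colors[n] == proposal for n in conflicts[node]):
            continue  # conflicting nodes must keep different colors
        old = colors[node]
        before = len(set(colors.values()))
        colors[node] = proposal
        delta = len(set(colors.values())) - before
        # Metropolis rule: always accept improvements; occasionally accept
        # worse moves while the temperature is high to escape local minima.
        if delta > 0 and random.random() >= math.exp(-delta / temp):
            colors[node] = old
        temp *= cooling
    return colors

# Four probe sets in a row, each unreachable from its neighbors (e.g. walls):
conflicts = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(anneal(conflicts, greedy_coloring(conflicts)))  # e.g. {0: 0, 1: 1, 2: 0, 3: 1}
```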
In some embodiments, the connectivity vector of the illumination probe is stored in a binding relationship with the illumination probe. In the case that a plurality of illumination probes correspond to the same communication relation vector, the communication relation vector is stored only once and is set to correspond to the plurality of illumination probes.
303. The terminal carries out ray detection on the illumination probe adjacent to the position point based on the position point in the virtual scene, and determines the communication relation between the position point and the adjacent illumination probe.
The illumination probe adjacent to the position point refers to an illumination probe whose distance from the position point meets a target distance condition, for example, the distance from the position point is less than or equal to a distance threshold, and the distance threshold is set by a technician according to an actual situation, which is not limited in the embodiment of the present application. Or, the N illumination probes with the smallest distance to the position point are all the illumination probes adjacent to the position point, and N is a positive integer, for example, 8. The communication relationship between the position point and the adjacent illumination probe comprises communication and non-communication. In addition, the number of the location points in the virtual scene is multiple, and for convenience of understanding, in the following description, a terminal is used for processing a first location point in the virtual scene as an example, and a method for processing other location points by the terminal and a method for processing the location point belong to the same inventive concept.
In one possible embodiment, the terminal emits the ray to the illumination probe adjacent to the position point with the position point as a starting point. In the case where the ray contacts any virtual object, the terminal determines that the location point is not in communication with an adjacent illumination probe. In the case where the ray does not contact any virtual object, the terminal determines that the location point communicates with an adjacent illumination probe.
For ease of understanding, the following description takes the case where the position point has one adjacent illumination probe as an example.
In this embodiment, whether the position point and the adjacent illumination probe are communicated or not means whether a connection line between the position point and the adjacent illumination probe is blocked by a virtual object in the virtual scene or not, and when the connection line between the position point and the adjacent illumination probe is blocked by the virtual object in the virtual scene, it is determined that the position point and the adjacent illumination probe are not communicated; and under the condition that the connecting line of the position point and the adjacent illumination probe is not shielded by the virtual object in the virtual scene, determining that the position point is communicated with the adjacent illumination probe.
For example, the terminal performs ray detection on the adjacent illumination probe based on the position point, that is, emits a ray to the adjacent illumination probe with the position point as the starting point. In the case that the ray is not blocked by any virtual object, the terminal determines that the position point communicates with the adjacent illumination probe; in the case that the ray is blocked by any virtual object, the terminal determines that the position point does not communicate with the adjacent illumination probe. The above description takes the terminal performing ray detection from the position point toward the adjacent illumination probe as an example; in other possible embodiments, the terminal can instead perform ray detection from the adjacent illumination probe toward the position point, and the embodiment of the present application is not limited thereto. In some embodiments, when determining whether the ray is blocked by a virtual object, the terminal can rely on a convex polyhedron that encloses the virtual object, where the surface of the convex polyhedron does not intersect the enclosed virtual object. In the case that the ray does not contact the convex polyhedron of any virtual object, it is determined that the ray is not blocked by a virtual object; in the case that the ray contacts the convex polyhedron of any virtual object, it is determined that the ray is blocked by that virtual object. In some embodiments, this method is also referred to as convex hull expansion.
In one possible embodiment, the terminal emits a ray to the illumination probe adjacent to the position point with the position point as the starting point. In the case that the ray does not contact any virtual object, or only contacts a first-type virtual object in the virtual scene, the terminal determines that the position point communicates with the adjacent illumination probe, where the first-type virtual object is a transparent virtual object in the virtual scene. In the case that the ray contacts any second-type virtual object in the virtual scene, the terminal determines that the position point does not communicate with the adjacent illumination probe, where the second-type virtual object is an opaque virtual object in the virtual scene.
In this embodiment, whether the position point is communicated with the adjacent illumination probe means whether a connection line between the position point and the adjacent illumination probe is not blocked by a virtual object or is only blocked by a transparent virtual object in the virtual scene, and the position point is determined to be communicated with the adjacent illumination probe when the connection line between the position point and the adjacent illumination probe is not blocked by a virtual object or is only blocked by a transparent virtual object in the virtual scene; and under the condition that the connecting line of the position point and the adjacent illumination probe is shielded by an opaque virtual object in the virtual scene, determining that the position point is not communicated with the adjacent illumination probe.
For example, the terminal performs ray detection on the adjacent illumination probe based on the position point, that is, emits a ray to the adjacent illumination probe with the position point as the starting point. In the case that the ray is not blocked by any virtual object, the terminal determines that the position point communicates with the adjacent illumination probe. Otherwise, the terminal determines the type of the virtual object blocking the ray: in the case that the virtual object is a first-type virtual object, the terminal determines that the position point communicates with the adjacent illumination probe; in the case that the virtual object is a second-type virtual object, the terminal determines that the position point does not communicate with the adjacent illumination probe. In some embodiments, when determining whether the ray is blocked by a virtual object, the terminal can rely on a convex polyhedron that encloses the virtual object without intersecting it. In the case that the ray does not contact the convex polyhedron of any virtual object, it is determined that the ray is not blocked by a virtual object; in the case that the ray contacts the convex polyhedron of any virtual object, it is determined that the ray is blocked by that virtual object. Accordingly, in the case that the ray contacts the convex polyhedron of a virtual object, the terminal determines the type of the virtual object enclosed by that convex polyhedron, for example by querying the identifier of the enclosed virtual object based on the identifier of the convex polyhedron, and then querying the type of the virtual object (transparent or opaque) based on the identifier of the virtual object.
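An illustrative sketch of such ray detection follows; it simplifies the convex polyhedron to an axis-aligned box tested with a standard slab method, and treats only opaque (second-type) objects as blockers. The simplification and all names are assumptions.

```python
def segment_hits_box(p0, p1, box_min, box_max):
    """Slab test: does the segment from p0 to p1 intersect the axis-aligned
    box? The box stands in for the convex polyhedron enclosing an object."""
    t_min, t_max = 0.0, 1.0
    for k in range(3):
        d = p1[k] - p0[k]
        if abs(d) < 1e-12:  # segment parallel to this pair of slabs
            if p0[k] < box_min[k] or p0[k] > box_max[k]:
                return False
        else:
            t0 = (box_min[k] - p0[k]) / d
            t1 = (box_max[k] - p0[k]) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_min, t_max = max(t_min, t0), min(t_max, t1)
            if t_min > t_max:
                return False
    return True

def communicates(point, probe, objects):
    """The point and the probe communicate iff the segment between them is
    blocked by no object, or only by transparent (first-type) objects."""
    for box_min, box_max, transparent in objects:
        if segment_hits_box(point, probe, box_min, box_max) and not transparent:
            return False  # opaque (second-type) blocker
    return True

# A virtual wall between the point and the probe blocks communication:
wall = ((2.0, -1.0, -1.0), (2.2, 1.0, 1.0), False)
print(communicates((0, 0, 0), (4, 0, 0), [wall]))  # False
```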
In some embodiments, the communication relationship (visibility) between the position point and the illumination probe is not limited to being discrete (communicated or not communicated); it may also be continuous (expressed as a degree of communication, where 0 means completely non-communicated and 1 means completely communicated), which is not limited in this application.
304. The terminal determines a communication relation vector group of the position point based on the communication relation between the position point in the virtual scene and the illumination probe adjacent to the position point, wherein the communication relation vector group comprises communication relation vectors of the illumination probes adjacent to and communicated with the position point.
By determining the communication relation vector group of the position point, the illumination probes adjacent to and communicated with the position point are recorded through the group. In the subsequent rendering of the position point, the illumination probes adjacent to and communicated with the position point are determined through the communication relation vector group of the position point, and the illumination information of those illumination probes is then sampled for rendering. This prevents the illumination information of illumination probes that are not communicated with the position point from being sampled during rendering, thereby improving the illumination authenticity of the position point.
In one possible embodiment, in a case where the position point is communicated with the illumination probe adjacent to the position point, the terminal adds the communication relation vector of the illumination probe adjacent to the position point to the communication relation vector group of the position point.
In this embodiment, the terminal stores the communication relation vector of the illumination probe adjacent to and communicated with the position point through the communication relation vector group, and the illumination probe adjacent to and communicated with the position point can be quickly determined through the communication relation vector group in the subsequent rendering process, so that the rendering efficiency is improved.
In some embodiments, the connected relation vector group is also referred to as Mask, the Mask is divided into a Mask Map (Mask Map) and a Mask Volume (Mask Volume), the Mask Map is for a static virtual object in the virtual scene, the Mask Volume is for a dynamic virtual object in the virtual scene, and the terminal determines the type of the Mask according to whether the position point is located on the dynamic virtual object or the static virtual object in the virtual scene.
For example, when the position point belongs to a static virtual object in a virtual scene, and when the position point is communicated with an adjacent illumination probe, the terminal adds a communication relation vector of the illumination probe adjacent to the position point to the Mask Map of the position point. And under the condition that the position point belongs to a dynamic object in the virtual scene and the position point is communicated with the adjacent illumination probe, the terminal adds the communication relation vector of the illumination probe adjacent to the position point into the Mask Volume of the position point.
In one possible embodiment, in a case that the position point is not communicated with the illumination probe adjacent to the position point, the terminal does not add the communication relation vector of the illumination probe adjacent to the position point to the communication relation vector group of the position point.
In some embodiments, in the case where the connectivity (visibility) between the location point and the illumination probe is continuous (expressed with a connectivity parameter, 0 means completely disconnected, and 1 means completely connected), the terminal can further perform the following processing:
in a possible implementation manner, in the case that the communication parameter between the position point and an adjacent illumination probe is greater than or equal to a communication parameter threshold, the terminal adds both the communication relation vector of the adjacent illumination probe and the communication parameter between the adjacent illumination probe and the position point to the communication relation vector group of the position point, where the communication parameter threshold is set by a technician according to the actual situation, which is not limited in this application. In the case that the communication parameter between the position point and the adjacent illumination probe is smaller than the communication parameter threshold, the terminal adds neither the communication relation vector of the adjacent illumination probe nor the communication parameter to the communication relation vector group of the position point. Alternatively, the terminal directly adds the communication parameters and communication relation vectors of all illumination probes adjacent to the position point to the communication relation vector group of the position point, and determines whether to sample based on the communication parameter during subsequent rendering, which improves the flexibility of the method.
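For illustration, the construction of a position point's communication relation vector group under the continuous-visibility variant can be sketched as follows, where the threshold value and all names are hypothetical:

```python
CONNECT_THRESHOLD = 0.5  # hypothetical communication-parameter threshold

def build_mask(adjacent_visibility, probe_vectors):
    """Collect, for one position point, the connectivity vectors (and the
    continuous communication parameters) of adjacent probes whose
    communication parameter reaches the threshold."""
    mask = []
    for probe_id, vis in adjacent_visibility.items():
        if vis >= CONNECT_THRESHOLD:
            mask.append((tuple(probe_vectors[probe_id]), vis))
    return mask

vectors = {0: (1, 0, 0, 0), 1: (0, 1, 0, 0)}
visibility = {0: 0.9, 1: 0.1}   # the point sees probe 0; probe 1 is walled off
print(build_mask(visibility, vectors))  # [((1, 0, 0, 0), 0.9)]
```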
Alternatively, the terminal can record the illumination probes adjacent to and connected with the position point in addition to determining the connected relation vector group of the position point through the step 304 and recording the illumination probes adjacent to and connected with the position point by using the connected relation vector group.
In a possible implementation manner, the terminal determines the communication relation vector of the position point based on the communication relation vectors of the illumination probes adjacent to and communicated with the position point. The communication relation vector of the position point is orthogonal to the communication relation vector of any illumination probe that is adjacent to but not communicated with the position point, and is not orthogonal to the communication relation vector of any illumination probe that is adjacent to and communicated with the position point. During subsequent rendering, based on the communication relation vector of the position point and the communication relation vectors of the illumination probes adjacent to the position point, the illumination probes adjacent to and communicated with the position point and those adjacent to but not communicated with the position point can be distinguished among the illumination probes adjacent to the position point.
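Assuming the one-hot encoding sketched earlier, the position point's own vector can be formed as the componentwise OR of the vectors of its communicated adjacent probes; a dot product then distinguishes communicated from non-communicated adjacent probes, relying on the coloring step having given adjacent but non-communicated probes different colors. This is an illustrative sketch, not the embodiment's literal storage scheme.

```python
import numpy as np

def point_vector(connected_probe_vectors, dim=8):
    """Componentwise OR of the vectors of probes adjacent to and
    communicated with the point."""
    v = np.zeros(dim, dtype=np.uint8)
    for pv in connected_probe_vectors:
        v |= np.asarray(pv, dtype=np.uint8)
    return v

def probe_communicates(point_vec, probe_vec):
    # non-orthogonal (positive dot product) means communicated
    return int(point_vec @ np.asarray(probe_vec)) > 0

probes = {0: (1, 0, 0, 0), 1: (0, 1, 0, 0)}   # one-hot vectors, dim 4
pv = point_vector([probes[0]], dim=4)          # the point sees only probe 0
print(probe_communicates(pv, probes[0]), probe_communicates(pv, probes[1]))  # True False
```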
Optionally, after step 304, the terminal can also perform the following steps.
In a possible implementation manner, the terminal merges identical communication relation vector groups among the communication relation vector groups of the plurality of position points to obtain merged communication relation vector groups of the plurality of position points, where one merged communication relation vector group corresponds to at least one of the plurality of position points.
In the virtual scene, the number of position points is large, and adjacent position points may correspond to the same illumination probes. In this case, the terminal can merge the communication relation vector groups of position points that correspond to the same illumination probes, thereby reducing the number of stored communication relation vector groups.
For example, the terminal merges identical communication relation vector groups among the communication relation vector groups of the plurality of position points into one communication relation vector group, and uses a pointer (indirection) texture to indicate the correspondence between the merged communication relation vector groups and the position points. For example, the terminal scans the Mask Volume in units of M x M data blocks, merges data blocks with identical data into one data block to construct a compressed data-block atlas (Atlas), and stores the index of each data block within the atlas in the indirection texture. This method can compress the Mask Volume to less than 10% of its original size, making the overhead of the Mask Volume acceptable.
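An illustrative sketch of this block-level compression follows, treating each M x M block as a hashable tuple of texels; all names are assumptions.

```python
def compress_mask_volume(blocks):
    """Store each distinct block once in an atlas; an indirection table
    (the 'pointer texture') maps every original block slot to its atlas
    index, so duplicated blocks cost almost nothing."""
    atlas, seen, indirection = [], {}, []
    for block in blocks:  # block: tuple of texel values of one M x M tile
        if block not in seen:
            seen[block] = len(atlas)
            atlas.append(block)
        indirection.append(seen[block])
    return atlas, indirection

blocks = [(0, 0, 1, 1), (0, 0, 1, 1), (2, 2, 2, 2), (0, 0, 1, 1)]
atlas, table = compress_mask_volume(blocks)
print(len(atlas), table)  # 2 [0, 0, 1, 0]: three identical blocks stored once
```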
305. The terminal bakes the virtual scene based on the connected relation vector groups of the position points.
In a possible implementation manner, for a static virtual object in the virtual scene, the terminal bakes the static object based on the connected relationship vector group of the multiple position points on the static virtual object and the illumination map of the static virtual object.
The process of baking the static object is the process of baking the connected relation vector groups of the plurality of position points on the static object, together with the illumination map, onto the surface of the static object. Baking reduces the amount of computation during subsequent rendering, thereby improving rendering efficiency. In some embodiments, the above embodiment is a process in which the terminal bakes the Mask Map of the static object together with the lighting map onto the surface of the static object.
In one possible implementation manner, for a dynamic object in the virtual scene, the terminal bakes the dynamic object based on the connected relation vector group of the plurality of position points on the dynamic object.
The process of baking the dynamic object, namely the process of baking the connected relation vector group of the plurality of position points on the dynamic object to the surface of the dynamic object, can reduce the computation amount in the subsequent rendering process by baking, and therefore the rendering efficiency is improved. In some embodiments, the above embodiment is a process in which the terminal bakes the Mask Volume of the dynamic object to the surface of the dynamic object, where the dynamic object refers to an object that moves freely in the virtual scene.
It should be noted that steps 301-305 are described above with a terminal as the executing body by way of example. In other possible embodiments, steps 301-305 may also be performed by a server; for example, the server is a cloud baking platform, and the server bakes the virtual scene by performing steps 301-305.
Optionally, after step 305, the following step 306 can further be executed. It should be noted that step 306 is the rendering process of the virtual scene; the device executing step 306 may be the same terminal that executes steps 301-305, or may be another terminal. For example, in an actual implementation, a first terminal executes steps 301-305 and a second terminal executes step 306; alternatively, step 306 is executed by a server. The embodiment of the present application does not limit the executing body.
306. The terminal renders the plurality of position points based on the connected relation vector groups of the position points and the textures of the plurality of illumination probes.
In a possible implementation, the terminal determines at least one target illumination probe communicating with the location point from the illumination probes adjacent to the location point based on the set of connectivity relation vectors of the location point. And the terminal samples the texture of the at least one target illumination probe at the position point to obtain the illumination information of the at least one target illumination probe at the position point. And the terminal fuses the illumination information of the at least one target illumination probe at the position point to obtain the target illumination information of the position point.
During the rendering process, the position point is also referred to as a render point (Render Point).
In this embodiment, the terminal can quickly determine the target illumination probe through the connected relation vector group of the position point, so that the rendering is performed based on the illumination information of the target illumination probe.
In order to more clearly explain the above embodiment, the above embodiment will be explained in three parts.
The first part, the terminal, determines at least one target illumination probe communicating with the position point from the illumination probes adjacent to the position point based on the set of communication relation vectors of the position point.
In a possible implementation manner, the terminal determines the illumination probes adjacent to the position point in the virtual scene and the communication relation vectors of those illumination probes. The terminal compares the communication relation vectors in the communication relation vector group of the position point with the communication relation vectors of the illumination probes adjacent to the position point, and determines at least one target illumination probe from the illumination probes adjacent to the position point. That is, in the case that the communication relation vector of any illumination probe adjacent to the position point is stored in the communication relation vector group of the position point, that illumination probe is determined as a target illumination probe of the position point; in the case that the communication relation vector of any illumination probe adjacent to the position point is not stored in the communication relation vector group of the position point, that illumination probe is not a target illumination probe of the position point.
In a possible embodiment, the set of connectivity relation vectors of the location point further includes a connectivity parameter between the location point and the neighboring illumination probes, and the terminal determines at least one target illumination probe from the illumination probes neighboring to the location point based on the connectivity parameter. For example, for the illumination probe adjacent to the location point, in the case that the connectivity parameter of the adjacent illumination probe is greater than or equal to the connectivity parameter threshold, the terminal determines the adjacent illumination probe as the target illumination probe. And under the condition that the communication parameter of the adjacent illumination probe is smaller than the threshold value of the communication parameter, the terminal does not determine the adjacent illumination probe as the target illumination probe. Since the connected parameter threshold is set by the technician according to the actual situation, the technician can change the illumination of the location point by adjusting the connected parameter threshold.
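Continuing the earlier mask sketch, target-probe selection can be illustrated as follows, where a probe qualifies when its connectivity vector is stored in the point's mask with a sufficient communication parameter (the threshold value and all names are hypothetical):

```python
def select_targets(adjacent_vectors, mask, threshold=0.5):
    """adjacent_vectors: {probe_id: connectivity vector} of probes near the
    render point; mask: [(vector, communication parameter), ...] as baked.
    Returns the ids of probes whose vector is stored in the mask with a
    communication parameter reaching the threshold."""
    stored = {tuple(vec): vis for vec, vis in mask}
    return [pid for pid, vec in adjacent_vectors.items()
            if stored.get(tuple(vec), 0.0) >= threshold]

adjacent = {0: (1, 0, 0, 0), 1: (0, 1, 0, 0)}
mask = [((1, 0, 0, 0), 0.9)]                 # as baked for this render point
print(select_targets(adjacent, mask))        # [0]
```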
And the second part samples the texture of the at least one target illumination probe at the position point by the terminal to obtain the illumination information of the at least one target illumination probe at the position point.
The illumination information is used for reflecting the illumination color, the intensity and other information of the target illumination probe at the position point. In sampling the texture of the target illumination probe, it may be performed by considering the target illumination probe as one light source.
In some embodiments, the illumination information of the at least one target illumination probe adjacent to the position point is stored in the same texture, and the terminal performs a single hardware-accelerated trilinear interpolation sampling from that texture to obtain the illumination information of the at least one target illumination probe at the position point, which greatly reduces the number of sampling operations.
And the third part and the terminal fuse the illumination information of the at least one target illumination probe at the position point to obtain the target illumination information of the position point.
In a possible implementation manner, the terminal adds the illumination information of the at least one target illumination probe at the location point to obtain the target illumination information of the location point.
In this embodiment, the terminal directly adds the illumination information of at least one target illumination probe at the position point to obtain the target illumination information of the position point, and the speed is high and the efficiency is high.
In a possible implementation manner, the terminal performs a weighted summation on the illumination information of the at least one target illumination probe at the location point based on an illumination weight between the at least one target illumination probe and the location point, so as to obtain the target illumination information of the location point, where the illumination weight is inversely related to a distance between the target illumination probe and the location point.
In this embodiment, the terminal performs weighted summation on the illumination information of the at least one target illumination probe at the position point to obtain the target illumination information of the position point, and the accuracy of the target illumination information of the position point is high.
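The two fusion strategies can be sketched as follows; the inverse-distance weight and its normalization are one possible realization of a weight negatively correlated with distance, not the embodiment's prescribed form.

```python
def fuse_sum(samples):
    """Plain summation of the target probes' illumination at the point."""
    return tuple(sum(ch) for ch in zip(*samples))

def fuse_weighted(samples, distances):
    """Distance-weighted average: nearer target probes contribute more,
    since the weight falls off with the probe-to-point distance."""
    weights = [1.0 / (d + 1e-6) for d in distances]
    total = sum(weights)
    return tuple(sum(w * ch for w, ch in zip(weights, chans)) / total
                 for chans in zip(*samples))

rgb = fuse_weighted([(1.0, 0.8, 0.6), (0.2, 0.2, 0.2)], distances=[1.0, 3.0])
print(rgb)  # the closer probe dominates the fused illumination
```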
In some embodiments, in the case that the location point is located in a dynamic object in the virtual scene, the terminal is further capable of projecting the dynamic object onto a virtual ground, and rendering the location point on the dynamic object using a set of connected relation vectors corresponding to the virtual ground. Therefore, when the position of the dynamic object changes, the connected relation vector group can be obtained in time, and a better illumination effect is obtained.
It should be noted that, in the above step 306, the rendering of one position point in the virtual scene is taken as an example for explanation, and a method for rendering other position points in the virtual scene by the terminal and a method for rendering the position point belong to the same inventive concept, and the implementation process is not described again.
The following describes the technical solution provided by the embodiment of the present application with reference to fig. 4 and the above steps 301-306.
Referring to fig. 4, during rendering, four illumination probes exist around a position point 401 in the virtual scene: illumination probe 402, illumination probe 403, illumination probe 404, and illumination probe 405. Illumination probe 402, illumination probe 403 and illumination probe 404 belong to the same illumination probe set, while illumination probe 405 belongs to another illumination probe set; the two illumination probe sets are separated by a virtual wall 406 in the virtual scene. That is, illumination probes 402, 403 and 404 communicate with each other, and none of them communicates with illumination probe 405. The position point 401 communicates with illumination probes 402, 403 and 404, so the communication relation vector group (Mask) of the position point 401 includes the communication relation vectors (color vectors) of illumination probes 402, 403 and 404. When the position point 401 is subsequently rendered, the illumination information of illumination probes 402, 403 and 404 at the position point 401 is sampled, and the illumination information of illumination probe 405 is not sampled.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
When the scheme in the related art is adopted, there is a serious light leakage problem: in the rendering result, undesirable light spots appear in dark places and obvious, unnatural local dimming appears in bright places. For example, the virtual scene includes a virtual wall; one side of the virtual wall is an outdoor space with a virtual light source of higher brightness, and the other side is an indoor space without a virtual light source. When light leakage occurs, light spots that should not appear are present on the indoor side of the virtual wall, and unnatural local dimming appears on the outdoor side of the virtual wall. Through the technical scheme provided by the embodiment of the application, the light leakage phenomenon in the virtual scene can be eliminated.
In the experimental process, two terminals are used: terminal 1 is equipped with an i9-9900K processor and an RTX 3070 graphics card, terminal 2 is equipped with a Ryzen 7 1800X processor and a GTX 1080 Ti graphics card, and both terminals have 32 GB of system memory. As can be seen from table 1 below, the time overhead of the scheme provided in the embodiment of the application is close to that of the baseline GI (Global Illumination) with light leakage, and is significantly lower than that of the visibility-based light leakage suppression scheme for GI (a comparison scheme).
TABLE 1 (time overhead comparison on terminal 1 and terminal 2; the tabulated data are provided as images in the original filing)
Referring to fig. 5, the top row shows the final rendering results, the middle row shows the illumination results without post-processing, and the bottom row shows the difference between the illumination result and the reference image. In fig. 5, (a) is the original algorithm, where significant light leakage is visible on both the ceiling and the floor; (b) is the rendering result of RTXGI (a comparison method); (c) and (d) are rendering results obtained with the technical scheme provided by the embodiment of the application, where (c) uses only geometric information to divide the illumination probes and (d) also considers differences in the illumination information; (e) is the leakage-free result rendered using the illumination map.
It can be seen that the technical scheme provided by the embodiment of the application can effectively inhibit light leakage, and the accuracy of final rendering is superior to that of the RTXGI algorithm.
Fig. 6 shows the effect of the technical scheme provided by the embodiment of the application on suppressing light leakage of a dynamic object. In fig. 6, (a) is an effect diagram without the technical scheme provided by the embodiment of the application, (b) is an effect diagram with the technical scheme, and (c) is a comparison of local details: the upper part of (c) is without the technical scheme, and the lower part is with it.
According to the technical scheme provided by the embodiment of the application, when the virtual scene is baked, the communication relation vectors of the plurality of illumination probes are determined based on the communication relation among the plurality of illumination probes. The communication relation vector group of a position point is then determined based on the communication relation between the position point and the adjacent illumination probes in the virtual scene; the communication relation vectors stored in the group are those of the illumination probes communicated with the position point, that is, of the illumination probes that can influence the illumination of the position point. The virtual scene is baked based on the communication relation vector groups of the position points, so that each group is bound to its position point in advance. Because the communication relation vector group of a position point reflects the communication relation between the position point and the adjacent illumination probes, subsequent rendering based on the group can obtain more accurate illumination information, thereby improving the realism of the illumination.
The embodiment of the application provides a light leakage suppression method for the global illumination technology based on illumination probes. Compared with a light leakage suppression method based on bounding volumes, the method provided by the embodiment of the application automatically preprocesses the virtual scene in the baking stage, which saves labor cost and ensures that the virtual scene and the illumination information stay synchronized. Compared with a visibility-based light leakage suppression method, the method provided by the embodiment of the application avoids sampling and evaluating depth information at runtime; meanwhile, by splitting illumination probes with different communication relation vector values into different textures, the number of texture sampling operations is reduced as much as possible, lowering the overhead, so the method can be applied to mobile games.
Fig. 7 is a schematic structural diagram of a baking apparatus for a virtual scene according to an embodiment of the present application, and referring to fig. 7, the apparatus includes: a connected relation vector determination module 701, a connected relation vector group determination module 702, and a baking module 703.
A connection relation vector determining module 701, configured to determine a connection relation vector of the multiple illumination probes based on a connection relation between the multiple illumination probes in the virtual scene.
A connected relation vector group determining module 702, configured to determine a connected relation vector group of a position point based on a connected relation between the position point in the virtual scene and an illumination probe adjacent to the position point, where the connected relation vector group includes a connected relation vector of the illumination probe adjacent to and connected to the position point.
And a baking module 703, configured to bake the virtual scene based on the set of connected relationship vectors of the position points.
In a possible implementation manner, the connectivity vector determining module 701 is configured to divide the plurality of illumination probes into a plurality of illumination probe sets based on the connectivity between the plurality of illumination probes in the virtual scene, where the illumination probes in each illumination probe set are connected to each other. Determining the communication relation vectors of the illumination probes in the plurality of illumination probe groups, wherein the communication relation vectors of the illumination probes in the same illumination probe group are the same, and the communication relation vectors of the illumination probes in different illumination probe groups are orthogonal to each other.
In a possible implementation, the connected relation vector determining module 701 is configured to perform any one of the following:
for a first illumination probe and a second illumination probe of the plurality of illumination probes, under the condition that a connecting line between the first illumination probe and the second illumination probe is not shielded by any virtual object, dividing the first illumination probe and the second illumination probe into the same illumination probe group.
And under the condition that a connecting line between the first illumination probe and the second illumination probe is shielded by any virtual object, dividing the first illumination probe and the second illumination probe into different illumination probe groups.
In a possible implementation, the connected relation vector determining module 701 is configured to perform any one of the following:
for a first illumination probe and a second illumination probe of the plurality of illumination probes, under the condition that a connecting line between the first illumination probe and the second illumination probe is not shielded by any virtual object or is only shielded by a first type of virtual object, the first illumination probe and the second illumination probe are divided into the same illumination probe group, and the first type of virtual object is a transparent virtual object in the virtual scene.
Under the condition that a connecting line between the first illumination probe and the second illumination probe is shielded by any second virtual object, the first illumination probe and the second illumination probe are divided into different illumination probe groups, and the second virtual object is an opaque virtual object in the virtual scene.
In a possible embodiment, the connectivity vector determining module 701 is configured to assign initial connectivity vectors to the illumination probes in the plurality of illumination probe sets based on the relative position relationships between the plurality of illumination probe sets, the illumination probes in each illumination probe set having the same initial connectivity vector, and the illumination probes in any two adjacent and non-connected illumination probe sets in the plurality of illumination probe sets having different initial connectivity vectors; and optimizing the initial communication relation vectors of the illumination probes in the plurality of illumination probe groups by adopting a simulated annealing method to obtain the communication relation vectors of the illumination probes in the plurality of illumination probe groups.
In a possible implementation, the connected relation vector group determining module 702 is configured to perform any one of:
and in the case that the position point is communicated with the illumination probe adjacent to the position point, adding the communication relation vector of the illumination probe adjacent to the position point to the communication relation vector group of the position point.
And in the case that the position point is not communicated with the illumination probe adjacent to the position point, the communication relation vector of the illumination probe adjacent to the position point is not added to the communication relation vector group of the position point.
In one possible embodiment, the apparatus further comprises:
and the communication relation determining module is used for carrying out ray detection on the illumination probe adjacent to the position point based on the position point and determining the communication relation between the position point and the adjacent illumination probe.
In a possible embodiment, the connectivity determining module is configured to emit, with the position point as a starting point, a ray to an illumination probe adjacent to the position point. In the case where the ray is in contact with any virtual object, it is determined that the location point is not in communication with an adjacent illumination probe. In the case where the ray does not contact any virtual object, it is determined that the location point communicates with an adjacent illumination probe.
In a possible embodiment, the connectivity determining module is configured to emit a ray to the illumination probe adjacent to the location point, with the location point as a starting point; determine that the location point communicates with the adjacent illumination probe in the case that the ray does not contact any virtual object or only contacts a first-type virtual object in the virtual scene, where the first-type virtual object is a transparent virtual object in the virtual scene; and determine that the location point does not communicate with the adjacent illumination probe in the case that the ray contacts any second-type virtual object in the virtual scene, where the second-type virtual object is an opaque virtual object in the virtual scene.
In a possible implementation manner, the baking module 703 is configured to, for a static virtual object in the virtual scene, bake the static object based on a connected relationship vector group of a plurality of location points on the static virtual object and an illumination map of the static virtual object. And baking the dynamic object in the virtual scene based on the connected relation vector group of the plurality of position points on the dynamic object.
In a possible embodiment, the number of the location points is multiple, and the apparatus further includes:
and the position point merging module is used for merging the same communication relation vector group in the communication relation vector groups of the plurality of position points to obtain a communication relation vector group after merging the plurality of position points, wherein the merged communication relation vector group corresponds to at least one position point in the plurality of position points.
In one possible embodiment, the apparatus further comprises:
and the rendering module is used for rendering the plurality of position points based on the connected relation vector group of the position points and the textures of the plurality of illumination probes.
In a possible embodiment, the rendering module is configured to determine at least one target illumination probe communicating with the location point from the illumination probes adjacent to the location point based on the set of connectivity relation vectors of the location point. Sampling the texture of the at least one target illumination probe at the location point to obtain illumination information of the at least one target illumination probe at the location point. And fusing the illumination information of the at least one target illumination probe at the position point to obtain the target illumination information of the position point.
In a possible implementation, the rendering module is configured to perform any one of:
and adding the illumination information of the at least one target illumination probe at the position point to obtain the target illumination information of the position point.
And carrying out weighted summation on the illumination information of the at least one target illumination probe at the position point based on the illumination weight between the at least one target illumination probe and the position point to obtain the target illumination information of the position point, wherein the illumination weight is in negative correlation with the distance between the target illumination probe and the position point.
It should be noted that: the baking apparatus for a virtual scene provided in the foregoing embodiment is illustrated only by the division of the functional modules described above when baking a virtual scene; in practical applications, the functions may be distributed among different functional modules as needed, that is, the internal structure of the computer device may be divided into different functional modules to complete all or part of the functions described above. In addition, the baking apparatus for a virtual scene and the baking method for a virtual scene provided by the above embodiments belong to the same concept, and the specific implementation process is described in detail in the method embodiments and is not repeated here.
According to the technical scheme provided by the embodiment of the application, when the virtual scene is baked, the communication relation vectors of the plurality of illumination probes are determined based on the communication relation among the plurality of illumination probes. The communication relation vector group of a position point is determined based on the communication relation between the position point and the adjacent illumination probes in the virtual scene; the communication relation vectors stored in the group are those of the illumination probes communicated with the position point, that is, of the illumination probes that can influence the illumination of the position point. The virtual scene is baked based on the communication relation vector groups of the position points, so that each group is bound to its position point in advance. Because the communication relation vector group of a position point reflects the communication relation between the position point and the adjacent illumination probes, accurate illumination information can be obtained when rendering based on the group, thereby improving the realism of the illumination.
An embodiment of the present application provides a computer device, configured to perform the foregoing method, where the computer device may be implemented as a terminal or a server, and a structure of the terminal is introduced below:
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application. In general, the terminal 800 includes: one or more processors 801 and one or more memories 802.
The processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 802 is used to store at least one computer program for execution by the processor 801 to implement the baking method of a virtual scene provided by the method embodiments herein.
In some embodiments, the terminal 800 may further optionally include: a peripheral interface 803 and at least one peripheral. The processor 801, memory 802 and peripheral interface 803 may be connected by bus or signal lines. Various peripheral devices may be connected to the peripheral interface 803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 804, display 805, camera assembly 806, audio circuitry 807, and power supply 808.
The peripheral interface 803 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 801 and the memory 802. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which is not limited by the embodiment.
The radio frequency circuit 804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 804 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
The display screen 805 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 805 is a touch display screen, it can also capture touch signals on or above its surface; such a touch signal may be input to the processor 801 as a control signal for processing. In this case, the display screen 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard.
The camera assembly 806 is used to capture images or video. Optionally, the camera assembly 806 includes a front camera and a rear camera; generally, the front camera is disposed on the front panel of the terminal and the rear camera on the rear surface of the terminal.
The audio circuit 807 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 801 for processing, or to the radio frequency circuit 804 to realize voice communication.
The power supply 808 is used to supply power to the various components in the terminal 800, and may be alternating current, direct current, disposable batteries, or rechargeable batteries.
In some embodiments, the terminal 800 also includes one or more sensors 809. The one or more sensors 809 include, but are not limited to: acceleration sensor 810, gyro sensor 811, pressure sensor 812, optical sensor 813, and proximity sensor 814.
The acceleration sensor 810 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 800.
The gyro sensor 811 can detect the body direction and rotation angle of the terminal 800, and may cooperate with the acceleration sensor 810 to capture the 3D motions of the user on the terminal 800.
The pressure sensor 812 may be disposed on a side frame of the terminal 800 and/or at a lower layer of the display screen 805. When disposed on the side frame, it can detect the user's holding signal on the terminal 800, from which the processor 801 performs left/right-hand recognition or shortcut operations. When disposed at the lower layer of the display screen 805, the processor 801 controls the operability controls on the UI according to the user's pressure operations on the display screen 805.
The optical sensor 813 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the display screen 805 based on the ambient light intensity collected by the optical sensor 813.
The proximity sensor 814 is used to collect the distance between the user and the front surface of the terminal 800.
Those skilled in the art will appreciate that the configuration shown in Fig. 8 does not limit the terminal 800, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
The computer device may also be implemented as a server; the structure of the server is described below:
Fig. 9 is a schematic structural diagram of a server provided in an embodiment of this application. The server 900 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 901 and one or more memories 902, where the one or more memories 902 store at least one computer program that is loaded and executed by the one or more processors 901 to implement the methods provided by the foregoing method embodiments. Of course, the server 900 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may further include other components for implementing device functions, which are not described here.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including a computer program, is also provided; the computer program is executable by a processor to perform the baking method of a virtual scene in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, comprising program code stored in a computer-readable storage medium. A processor of a computer device reads the program code from the computer-readable storage medium and executes it, so that the computer device performs the method provided in the various alternative implementations described above.
In some embodiments, a computer program according to the embodiments of this application may be deployed and executed on one computer device, on multiple computer devices at one site, or on multiple computer devices distributed at multiple sites and interconnected by a communication network; the multiple computer devices distributed at multiple sites and interconnected by a communication network may constitute a blockchain system.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above description covers only alternative embodiments of the present application and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (18)

1. A method for baking a virtual scene, the method comprising:
determining connectivity relation vectors of a plurality of illumination probes based on connectivity relations among the plurality of illumination probes in the virtual scene;
determining a connectivity relation vector group of a position point based on the connectivity relation between the position point in the virtual scene and the illumination probes adjacent to the position point, wherein the connectivity relation vector group comprises the connectivity relation vectors of the illumination probes that are adjacent and connected to the position point;
and baking the virtual scene based on the connectivity relation vector group of the position point.
2. The method of claim 1, wherein determining the connectivity relation vectors of the plurality of illumination probes based on the connectivity relations among the plurality of illumination probes in the virtual scene comprises:
dividing the plurality of illumination probes into a plurality of illumination probe sets based on the connectivity relations among the plurality of illumination probes in the virtual scene, wherein the illumination probes in each illumination probe set are connected to one another;
and determining the connectivity relation vectors of the illumination probes in the plurality of illumination probe sets, wherein the connectivity relation vectors of illumination probes in the same illumination probe set are the same, and the connectivity relation vectors of illumination probes in different illumination probe sets are orthogonal to each other.
3. The method according to claim 2, wherein dividing the plurality of illumination probes into the plurality of illumination probe sets based on the connectivity relations among the plurality of illumination probes in the virtual scene comprises any one of the following:
for a first illumination probe and a second illumination probe in the plurality of illumination probes, in the case that the connecting line between the first illumination probe and the second illumination probe is not occluded by any virtual object, dividing the first illumination probe and the second illumination probe into the same illumination probe set;
and in the case that the connecting line between the first illumination probe and the second illumination probe is occluded by any virtual object, dividing the first illumination probe and the second illumination probe into different illumination probe sets.
4. The method according to claim 2, wherein dividing the plurality of illumination probes into the plurality of illumination probe sets based on the connectivity relations among the plurality of illumination probes in the virtual scene comprises any one of the following:
for a first illumination probe and a second illumination probe in the plurality of illumination probes, in the case that the connecting line between the first illumination probe and the second illumination probe is not occluded by any virtual object or is occluded only by a first type of virtual object, dividing the first illumination probe and the second illumination probe into the same illumination probe set, wherein the first type of virtual object is a transparent virtual object in the virtual scene;
and in the case that the connecting line between the first illumination probe and the second illumination probe is occluded by any second type of virtual object, dividing the first illumination probe and the second illumination probe into different illumination probe sets, wherein the second type of virtual object is an opaque virtual object in the virtual scene.
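(For illustration only; the following Python sketch is not part of the claims.) One way to realize the distinction drawn in claims 3 and 4 is an occlusion test in which only opaque, second-type objects break connectivity; scene.objects_intersecting_segment and the transparent attribute are assumed helpers, not names from the application.

def is_occluded(probe_a, probe_b, scene):
    # True only when an opaque (second-type) virtual object blocks the
    # connecting line; transparent (first-type) objects are ignored.
    for obj in scene.objects_intersecting_segment(probe_a.position, probe_b.position):
        if not obj.transparent:
            return True
    return False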
5. The method of claim 2, wherein determining the connectivity relation vectors of the illumination probes in the plurality of illumination probe sets comprises:
allocating initial connectivity relation vectors to the illumination probes in the plurality of illumination probe sets based on the relative position relations among the plurality of illumination probe sets, wherein the illumination probes in each illumination probe set have the same initial connectivity relation vector, and the illumination probes in any two adjacent but unconnected illumination probe sets among the plurality of illumination probe sets have different initial connectivity relation vectors;
and optimizing the initial connectivity relation vectors of the illumination probes in the plurality of illumination probe sets by a simulated annealing method to obtain the connectivity relation vectors of the illumination probes in the plurality of illumination probe sets.
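(For illustration only; not claim language.) Claim 5 does not fix an objective for the simulated annealing. One plausible reading, sketched below under that assumption, perturbs the set-to-axis assignment so that adjacent but unconnected probe sets avoid sharing a vector; the adjacency input, the cost function, and all names are assumptions.

import math
import random

def anneal_assignments(num_sets, adjacency, dim, steps=10000):
    # adjacency: list of (g, h) index pairs of adjacent-but-unconnected sets.
    assign = {g: g % dim for g in range(num_sets)}  # initial axis per set

    def cost(a):
        # Count adjacent, unconnected sets that share an axis.
        return sum(1 for g, h in adjacency if a[g] == a[h])

    current = cost(assign)
    temp = 1.0
    for _ in range(steps):
        g = random.randrange(num_sets)
        old = assign[g]
        assign[g] = random.randrange(dim)      # random perturbation
        new = cost(assign)
        # Metropolis criterion: always accept improvements, sometimes accept
        # worse states while the temperature is high.
        if new <= current or random.random() < math.exp((current - new) / temp):
            current = new
        else:
            assign[g] = old                    # undo rejected move
        temp = max(1e-3, temp * 0.999)         # geometric cooling schedule
    return assign                              # axis index per probe set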
6. The method according to claim 1, wherein determining the connectivity relation vector group of the position point based on the connectivity relation between the position point in the virtual scene and the illumination probes adjacent to the position point comprises any one of the following:
in the case that the position point is connected to an illumination probe adjacent to the position point, adding the connectivity relation vector of that adjacent illumination probe to the connectivity relation vector group of the position point;
and in the case that the position point is not connected to an illumination probe adjacent to the position point, not adding the connectivity relation vector of that adjacent illumination probe to the connectivity relation vector group of the position point.
7. The method of claim 1, wherein before determining the connectivity relation vector group of the position point based on the connectivity relation between the position point in the virtual scene and the illumination probes adjacent to the position point, the method further comprises:
performing ray detection on the illumination probes adjacent to the position point, based on the position point, to determine the connectivity relation between the position point and the adjacent illumination probes.
8. The method of claim 7, wherein performing ray detection on the illumination probes adjacent to the position point based on the position point to determine the connectivity relation between the position point and the adjacent illumination probes comprises:
emitting a ray from the position point, as a starting point, toward an illumination probe adjacent to the position point;
determining that the position point is not connected to the adjacent illumination probe in the case that the ray contacts any virtual object;
and determining that the position point is connected to the adjacent illumination probe in the case that the ray does not contact any virtual object.
9. The method of claim 7, wherein performing ray detection on the illumination probes adjacent to the position point based on the position point to determine the connectivity relation between the position point and the adjacent illumination probes comprises:
emitting a ray from the position point, as a starting point, toward an illumination probe adjacent to the position point;
in the case that the ray does not contact any virtual object or contacts only a first type of virtual object in the virtual scene, determining that the position point is connected to the adjacent illumination probe, wherein the first type of virtual object is a transparent virtual object in the virtual scene;
and in the case that the ray contacts any second type of virtual object in the virtual scene, determining that the position point is not connected to the adjacent illumination probe, wherein the second type of virtual object is an opaque virtual object in the virtual scene.
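(For illustration only; not claim language.) A compact sketch of the ray detection of claims 8 and 9: scene.raycast_all is an assumed helper returning every object hit on the segment from the point to the probe, and the ignore_transparent flag switches between the behaviour of the two claims.

def is_connected(point, probe, scene, ignore_transparent=True):
    hits = scene.raycast_all(origin=point, target=probe.position)
    if ignore_transparent:                     # claim 9: transparent hits allowed
        hits = [h for h in hits if not h.transparent]
    return len(hits) == 0                      # claim 8: any hit breaks connectivity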
10. The method of claim 1, wherein baking the virtual scene based on the connectivity relation vector group of the position point comprises:
for a static virtual object in the virtual scene, baking the static virtual object based on the connectivity relation vector groups of a plurality of position points on the static virtual object and an illumination map of the static virtual object;
and for a dynamic virtual object in the virtual scene, baking the dynamic virtual object based on the connectivity relation vector groups of a plurality of position points on the dynamic virtual object.
11. The method according to claim 1, wherein there are a plurality of the position points, and before baking the virtual scene based on the connectivity relation vector groups of the position points, the method further comprises:
merging identical connectivity relation vector groups among the connectivity relation vector groups of the plurality of position points to obtain merged connectivity relation vector groups of the plurality of position points, wherein each merged connectivity relation vector group corresponds to at least one of the plurality of position points.
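(For illustration only; not claim language.) Under one reading, the merging of claim 11 amounts to deduplicating identical vector groups so that each is stored once and every position point keeps an index into the merged table; all names below are hypothetical.

def merge_vector_groups(point_groups):
    # point_groups: one vector group per position point, each group being a
    # collection of hashable connectivity relation vectors (e.g. tuples).
    table, index_of, point_to_index = [], {}, []
    for group in point_groups:
        key = frozenset(group)                 # identical groups hash equally
        if key not in index_of:
            index_of[key] = len(table)
            table.append(group)
        point_to_index.append(index_of[key])
    return table, point_to_index               # merged table + per-point index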
12. The method of claim 1, wherein after baking the virtual scene based on the connectivity relation vector group of the position point, the method further comprises:
rendering a plurality of position points based on the connectivity relation vector groups of the plurality of position points and the textures of the plurality of illumination probes.
13. The method of claim 12, wherein rendering the plurality of position points based on the connectivity relation vector groups of the plurality of position points and the textures of the plurality of illumination probes comprises:
determining, from the illumination probes adjacent to a position point, at least one target illumination probe connected to the position point based on the connectivity relation vector group of the position point;
sampling the texture of the at least one target illumination probe at the position point to obtain illumination information of the at least one target illumination probe at the position point;
and fusing the illumination information of the at least one target illumination probe at the position point to obtain target illumination information of the position point.
14. The method according to claim 13, wherein fusing the illumination information of the at least one target illumination probe at the position point to obtain the target illumination information of the position point comprises any one of the following:
adding the illumination information of the at least one target illumination probe at the position point to obtain the target illumination information of the position point;
and performing weighted summation on the illumination information of the at least one target illumination probe at the position point based on an illumination weight between each target illumination probe and the position point to obtain the target illumination information of the position point, wherein the illumination weight is negatively correlated with the distance between the target illumination probe and the position point.
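(For illustration only; not claim language.) A sketch combining claims 13 and 14: target probes are selected via the point's connectivity relation vector group, their textures are sampled, and the samples are blended with weights anti-correlated with distance. sample_texture, distance, p.id, and p.position are assumed helpers, and the illumination samples are treated as scalars for simplicity.

def shade_point(point, adjacent_probes, vector_group, vectors):
    # Target probes (claim 13): adjacent probes whose vector is in the group.
    targets = [p for p in adjacent_probes if vectors[p.id] in vector_group]
    samples = [sample_texture(p, point) for p in targets]
    weights = [1.0 / max(distance(p.position, point), 1e-6) for p in targets]
    total = sum(weights) or 1.0
    # Claim 14: weighted sum; nearer probes contribute more.
    return sum(w / total * s for w, s in zip(weights, samples))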
15. A baking apparatus for a virtual scene, the apparatus comprising:
a connectivity relation vector determining module, configured to determine connectivity relation vectors of a plurality of illumination probes based on connectivity relations among the plurality of illumination probes in a virtual scene;
a connectivity relation vector group determining module, configured to determine a connectivity relation vector group of a position point based on the connectivity relation between the position point in the virtual scene and the illumination probes adjacent to the position point, wherein the connectivity relation vector group comprises the connectivity relation vectors of the illumination probes that are adjacent and connected to the position point;
and a baking module, configured to bake the virtual scene based on the connectivity relation vector group of the position point.
16. A computer device, characterized in that the computer device comprises one or more processors and one or more memories, in which at least one computer program is stored, the computer program being loaded and executed by the one or more processors to implement the method of baking a virtual scene as claimed in any one of claims 1 to 14.
17. A computer-readable storage medium, in which at least one computer program is stored, which is loaded and executed by a processor to implement a method of baking a virtual scene as claimed in any one of claims 1 to 14.
18. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the method of baking a virtual scene as claimed in any one of the claims 1 to 14.
CN202210470156.0A 2022-04-28 2022-04-28 Baking method, baking device, baking equipment and storage medium of virtual scene Pending CN115120970A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210470156.0A CN115120970A (en) 2022-04-28 2022-04-28 Baking method, baking device, baking equipment and storage medium of virtual scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210470156.0A CN115120970A (en) 2022-04-28 2022-04-28 Baking method, baking device, baking equipment and storage medium of virtual scene

Publications (1)

Publication Number Publication Date
CN115120970A true CN115120970A (en) 2022-09-30

Family

ID=83376589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210470156.0A Pending CN115120970A (en) 2022-04-28 2022-04-28 Baking method, baking device, baking equipment and storage medium of virtual scene

Country Status (1)

Country Link
CN (1) CN115120970A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023185287A1 (en) * 2022-04-02 2023-10-05 腾讯科技(深圳)有限公司 Virtual model lighting rendering method and apparatus, storage medium and electronic device


Similar Documents

Publication Publication Date Title
CN107886562A (en) Water surface rendering intent, device and readable storage medium storing program for executing
CN111729307B (en) Virtual scene display method, device, equipment and storage medium
CN110827391B (en) Image rendering method, device and equipment and storage medium
CN112370784B (en) Virtual scene display method, device, equipment and storage medium
CN114255315A (en) Rendering method, device and equipment
CN114299220A (en) Data generation method, device, equipment, medium and program product of illumination map
CN115120970A (en) Baking method, baking device, baking equipment and storage medium of virtual scene
KR20140000170A (en) Method for estimating the quantity of light received by a participating media, and corresponding device
CN112884873B (en) Method, device, equipment and medium for rendering virtual object in virtual environment
CN116672706B (en) Illumination rendering method, device, terminal and storage medium
CN112950753B (en) Virtual plant display method, device, equipment and storage medium
CN112802170A (en) Illumination image generation method, apparatus, device, and medium
CN112306332A (en) Method, device and equipment for determining selected target and storage medium
CN109939442B (en) Application role position abnormity identification method and device, electronic equipment and storage medium
CN116758208A (en) Global illumination rendering method and device, storage medium and electronic equipment
CN115761105A (en) Illumination rendering method and device, electronic equipment and storage medium
CN112473135A (en) Real-time illumination simulation method, device, equipment and storage medium for mobile game
KR20230013099A (en) Geometry-aware augmented reality effects using real-time depth maps
WO2023169013A1 (en) Global illumination calculation method and apparatus for three-dimensional space, device and storage medium
US20230090732A1 (en) System and method for real-time ray tracing in a 3d environment
CN113426131B (en) Picture generation method and device of virtual scene, computer equipment and storage medium
CN117173314B (en) Image processing method, device, equipment, medium and program product
WO2023029424A1 (en) Method for rendering application and related device
CN116993889A (en) Texture rendering method, device, terminal and storage medium
CN117959700A (en) Scene visibility data generation method, scene loading method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination