WO2023165198A1 - Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product - Google Patents
Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product
- Publication number
- WO2023165198A1 (PCT/CN2022/136193)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- rendering
- texture data
- image
- virtual object
- texture
- Prior art date
Links
- 238000009877 rendering Methods 0.000 title claims abstract description 454
- 238000000034 method Methods 0.000 title claims abstract description 135
- 238000004590 computer program Methods 0.000 title claims abstract description 20
- 238000006243 chemical reaction Methods 0.000 claims abstract description 156
- 238000012545 processing Methods 0.000 claims description 76
- 230000008569 process Effects 0.000 claims description 66
- 230000009466 transformation Effects 0.000 claims description 14
- 230000008859 change Effects 0.000 claims description 11
- 238000005070 sampling Methods 0.000 claims description 9
- 230000001902 propagating effect Effects 0.000 claims 1
- 230000006870 function Effects 0.000 description 65
- 230000000694 effects Effects 0.000 description 38
- 238000010586 diagram Methods 0.000 description 23
- 238000013507 mapping Methods 0.000 description 19
- 238000005516 engineering process Methods 0.000 description 18
- 239000000463 material Substances 0.000 description 14
- 238000004422 calculation algorithm Methods 0.000 description 13
- 238000004364 calculation method Methods 0.000 description 13
- 238000010801 machine learning Methods 0.000 description 12
- 230000003993 interaction Effects 0.000 description 8
- 230000008447 perception Effects 0.000 description 7
- 238000012546 transfer Methods 0.000 description 7
- 238000013473 artificial intelligence Methods 0.000 description 6
- 238000004891 communication Methods 0.000 description 5
- 238000009434 installation Methods 0.000 description 5
- 238000012804 iterative process Methods 0.000 description 4
- 238000004806 packaging method and process Methods 0.000 description 4
- 230000016776 visual perception Effects 0.000 description 4
- 230000003190 augmentative effect Effects 0.000 description 3
- 238000011161 development Methods 0.000 description 3
- 238000005293 physical law Methods 0.000 description 3
- 238000013515 script Methods 0.000 description 3
- 230000000007 visual effect Effects 0.000 description 3
- 230000009471 action Effects 0.000 description 2
- 238000013459 approach Methods 0.000 description 2
- 230000006835 compression Effects 0.000 description 2
- 238000007906 compression Methods 0.000 description 2
- 230000007123 defense Effects 0.000 description 2
- 239000011521 glass Substances 0.000 description 2
- 230000010354 integration Effects 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 238000004088 simulation Methods 0.000 description 2
- 238000012549 training Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000012512 characterization method Methods 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 239000002131 composite material Substances 0.000 description 1
- 230000009193 crawling Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000005538 encapsulation Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000009191 jumping Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 210000001747 pupil Anatomy 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 230000014860 sensory perception of taste Effects 0.000 description 1
- 238000012163 sequencing technique Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 230000004083 survival effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 239000002699 waste material Substances 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
Definitions
- the present application relates to computer graphics and image technology, and in particular to an image rendering method, device, electronic equipment, computer-readable storage medium and computer program product.
- Display technology based on graphics processing hardware expands the channels for perceiving the environment and obtaining information. In particular, display technology for virtual scenes can realize diversified interactions between virtual objects controlled by users or by artificial intelligence according to actual application requirements, and has various typical application scenarios; for example, in virtual scenes such as games, it can simulate a real battle process between virtual objects.
- High dynamic range texture resources are usually compressed to reduce the bandwidth consumption they cause, but it is difficult to achieve a high rendering quality when rendering from the compressed texture resources.
- Embodiments of the present application provide an image rendering method, device, electronic device, computer-readable storage medium, and computer program product, which use the loss between different rendering images to achieve a higher rendering quality from texture data that occupies less bandwidth, thereby improving the utilization of rendering resources.
- An embodiment of the present application provides an image rendering method, including:
- acquiring first texture data of a virtual object and conversion parameters corresponding to second texture data of the virtual object, wherein the data volume of the first texture data is smaller than the data volume of the second texture data, and the image information range of the first texture data is smaller than the image information range of the second texture data;
- performing fitting rendering processing based on the conversion parameters and the first texture data to obtain a fitting rendering image including the virtual object;
- determining a rendering loss between the fitting rendering image and a standard rendering image, and updating the conversion parameters and the first texture data based on the rendering loss, wherein the standard rendering image is a rendering image including the virtual object obtained by performing standard rendering processing based on the second texture data; and
- performing real-time rendering processing based on the updated conversion parameters and the updated first texture data to obtain a target rendering image including the virtual object.
- An embodiment of the present application provides an image rendering device, including:
- an acquisition module configured to acquire first texture data of a virtual object and conversion parameters corresponding to second texture data of the virtual object, wherein the data volume of the first texture data is smaller than the data volume of the second texture data, and the image information range of the first texture data is smaller than the image information range of the second texture data;
- a fitting module configured to perform fitting rendering processing based on the conversion parameters and the first texture data, to obtain a fitting rendering image including the virtual object;
- a loss module configured to determine a rendering loss between the fitting rendering image and a standard rendering image, and to update the conversion parameters and the first texture data based on the rendering loss, wherein the standard rendering image is a rendering image including the virtual object obtained by performing standard rendering processing based on the second texture data;
- a rendering module configured to perform real-time rendering processing based on the updated conversion parameters and the updated first texture data to obtain a target rendering image including the virtual object.
- An embodiment of the present application provides an electronic device, including:
- a memory for storing computer-executable instructions;
- a processor configured to implement the image rendering method provided in the embodiments of the present application when executing the computer-executable instructions stored in the memory.
- An embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions for implementing the image rendering method provided in the embodiment of the present application when executed by a processor.
- An embodiment of the present application provides a computer program product, including a computer program or computer-executable instructions.
- When the computer program or computer-executable instructions are executed by a processor, the image rendering method provided in the embodiments of the present application is implemented.
- Since the image information range of the second texture data is larger than that of the first texture data, and the data volume of the first texture data is smaller than that of the second texture data, real-time image rendering based on the updated first texture data and conversion parameters can approach the rendering effect of the second texture data while consuming less storage space and fewer computing resources, thereby effectively improving the utilization of rendering resources.
- FIG. 1A-FIG. 1B are schematic diagrams of the architecture of the image rendering system provided by the embodiment of the present application.
- FIG. 2 is a schematic structural diagram of an electronic device for image rendering provided by an embodiment of the present application
- 3A-3C are schematic flowcharts of the image rendering method provided by the embodiment of the present application.
- Fig. 4 is a rendering schematic diagram of an image rendering method provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of a fitting rendering process of an image rendering method provided in an embodiment of the present application.
- 6A-6E are schematic diagrams of dynamic range conversion of the image rendering method provided by the embodiment of the present application.
- FIGS. 7A-7B are rendering schematic diagrams of the image rendering method provided by the embodiment of the present application.
- FIG. 8 is a schematic diagram of a rasterized rendering process of an image rendering method provided in an embodiment of the present application.
- FIG. 9 is a schematic diagram of a machine learning fitting scene of an image rendering method provided in an embodiment of the present application.
- Fig. 10 is a schematic diagram of fitting texture of the image rendering method provided by the embodiment of the present application.
- Fig. 11 is a schematic flow chart of machine learning fitting of the image rendering method provided by the embodiment of the present application.
- Fig. 12 is a schematic diagram of the initialization value of the first texture data of the image rendering method provided by the embodiment of the present application.
- Fig. 13 is a screen space rendering diagram of a virtual object in the image rendering method provided by the embodiment of the present application.
- Fig. 14 is a screen space rendering diagram of a virtual object in the image rendering method provided by the embodiment of the present application.
- Fig. 15 is a screen space rendering diagram of fitting of a virtual object in the image rendering method provided by the embodiment of the present application.
- Fig. 16 is a schematic diagram of the pixel difference value in the screen space between the standard rendering result and the fitting result of the image rendering method provided by the embodiment of the present application;
- Fig. 17 is a schematic diagram of the loss effect of the image rendering method provided by the embodiment of the present application.
- Fig. 18 is a schematic diagram of fitting a dynamic range space conversion function of the image rendering method provided by the embodiment of the present application.
- The terms "first" and "second" are only used to distinguish similar objects and do not denote a specific order of objects. It can be understood that "first" and "second" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
- Client: an application running on a terminal for providing various services, such as a video playback client, a game client, and the like.
- Virtual scene: a virtual game scene displayed (or provided) when a game application runs on the terminal.
- the virtual scene can be a simulation environment of the real world, a semi-simulation and semi-fictional virtual environment, or a purely fictional virtual environment.
- the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the embodiment of the present application does not limit the dimensions of the virtual scene.
- the virtual scene may include sky, land, ocean, etc.
- the land may include environmental elements such as deserts and cities, and the user may control virtual objects to move in the virtual scene.
- Virtual objects: images of various people and objects that can interact in the virtual scene, or movable objects in the virtual scene.
- the movable object may be a virtual character, a virtual animal, an animation character, etc., for example, a character, an animal, etc. displayed in a virtual scene.
- the virtual object may be a virtual avatar representing the user in the virtual scene.
- the virtual scene may include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
- Image rendering: the process of converting the three-dimensional light energy transfer (radiosity) process into a two-dimensional image. Scenes and objects are expressed in three-dimensional form, which is closer to the real world and easy to manipulate and transform, while most image display devices are two-dimensional raster displays and dot-matrix printers. Going from the representation of the three-dimensional scene to the two-dimensional raster and dot-matrix representation is image rendering, that is, rasterization.
- the raster display can be regarded as a pixel matrix, and any image displayed on the raster display is actually a collection of pixels with one or more colors and grayscales.
- Tone mapping: its function is to map high dynamic range (HDR) colors to low dynamic range (LDR) colors so that a display can show them normally. The corresponding dynamic range space conversion function (inverse tone mapping) does the opposite: it maps low dynamic range LDR colors back to high dynamic range HDR colors, restoring the colors of the original brightness range.
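As a concrete illustration (not taken from the patent text), the classic Reinhard operator is one well-known tone-mapping pair: it compresses an HDR value c_H to an LDR value c_L = c_H / (1 + c_H), and the corresponding inverse mapping restores c_H = c_L / (1 - c_L). A minimal sketch:

```python
def reinhard_tonemap(c_hdr: float) -> float:
    # Map an HDR color value to the [0, 1) LDR range (Reinhard operator).
    return c_hdr / (1.0 + c_hdr)

def reinhard_inverse(c_ldr: float) -> float:
    # Inverse tone mapping: recover the HDR value from the LDR value.
    return c_ldr / (1.0 - c_ldr)

assert abs(reinhard_inverse(reinhard_tonemap(5.0)) - 5.0) < 1e-9
```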
- High dynamic range (HDR, High-Dynamic Range): compared with ordinary images, high-dynamic-range images provide a wider dynamic range (for example, the width of the dynamic range is greater than a width threshold) and a higher degree of detail (for example, the degree of detail exceeds a detail threshold). From low dynamic range (LDR, Low-Dynamic Range) images taken at different exposure times, the LDR image with the best detail at each exposure time is used to synthesize the final HDR image, which better reflects the visual effect perceived by people in a real environment.
- Low dynamic range (LDR, Low-Dynamic Range): a dynamic range whose width is not greater than the width threshold and whose degree of detail does not exceed the detail threshold, which causes loss of detail in highlights or shadows.
- Dynamic range is measured as the difference in exposure values.
- High dynamic range rendering (HDRR, High Dynamic Range Rendering).
- High dynamic range texture (HDRT, High Dynamic Range Texture) and low dynamic range texture (LDRT, Low Dynamic Range Texture): when the lighting brightness range in the game is relatively large, or the brightness difference between different areas is obvious, the difference in rendering effect between the two kinds of textures becomes more obvious.
- When implementing the embodiments of the present application, the applicant found that although the combination of HDRR and HDRT can provide a more realistic rendering effect, the rendering cost required by this combination is relatively high. For example, in uncompressed HDRT each channel requires 32 bits, so each pixel (including 3 channels) requires 96 bits, whereas in uncompressed LDRT each channel requires only 8 bits, and the texture can be made even smaller after compression.
- HDRR and HDRT not only add to the package size and memory occupation of a game, but also bring more bandwidth overhead and computation burden when rendering; this is particularly costly for mobile games, where resources are scarcer.
- Therefore, HDRT is compressed and the HDRR computation is optimized. The purpose is to minimize the package size and bandwidth occupied by HDRT and to reduce the computation cost of HDRR, while approaching as closely as possible the rendering effect provided by the original HDRT and HDRR.
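For a rough sense of scale (illustrative numbers, not taken from the patent), at the per-pixel costs above a single uncompressed 2048x2048 three-channel texture occupies 48 MiB as HDRT versus 12 MiB as LDRT:

```python
def texture_size_mib(width: int, height: int, bits_per_pixel: int) -> float:
    # Uncompressed texture size in MiB.
    return width * height * bits_per_pixel / 8 / (1024 ** 2)

print(texture_size_mib(2048, 2048, 96))  # HDRT (3 x 32 bits): 48.0 MiB
print(texture_size_mib(2048, 2048, 24))  # LDRT (3 x 8 bits):  12.0 MiB
```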
- A method of compressing HDRT to LDRT is provided in the related art: the difference between LDRT and HDRT is computed in texture space, and the Levenberg-Marquardt algorithm is then used to obtain the conversion parameters of the tone mapping function that minimize this difference; using these conversion parameters, HDRT can be converted into LDRT for packaging.
- However, the applicant found that the related art only considers the texture difference in texture space, whereas for rendering the difference between rendering results should be considered, so that the obtained result is more accurate.
- Embodiments of the present application provide an image rendering method, device, electronic device, computer-readable storage medium, and computer program product, which use the loss between different rendering images to achieve a higher rendering quality from texture data that occupies less bandwidth, thereby improving the utilization of rendering resources.
- an exemplary implementation scenario of the image rendering method provided by the embodiment of the present application is first described.
- The virtual objects in the image rendering method provided by the embodiments of the present application can be output entirely by the terminal, or output collaboratively by the terminal and the server.
- The virtual scene can be an environment in which game characters interact; for example, game characters can fight in the virtual scene, and by controlling the actions of the game characters the two sides can interact, so that users can relieve the stress of everyday life during the game.
- FIG. 1A is a schematic diagram of the architecture of the image rendering system provided by the embodiment of the present application, which is suitable for application modes in which the calculations related to the virtual scene 100 depend entirely on the computing power of the graphics processing hardware of the terminal 400, such as a stand-alone/offline-mode game, where the output of the virtual scene is completed through various types of terminals 400 such as smartphones, tablet computers, and virtual reality/augmented reality devices.
- the type of graphics processing hardware includes a central processing unit (CPU, Central Processing Unit) and a graphics processing unit (GPU, Graphics Processing Unit).
- The terminal 400 uses graphics computing hardware to calculate the data required for display, completes the loading, parsing, and rendering of the display data, and outputs, on graphics output hardware, video frames capable of forming a visual perception of the virtual scene; for example, a two-dimensional video frame is presented on the display screen of a smartphone, or a video frame achieving a three-dimensional display effect is projected on the lenses of augmented reality/virtual reality glasses. In addition, in order to enrich the perception effect, the terminal 400 can also use different hardware to form one or more of auditory perception, tactile perception, motion perception, and taste perception.
- a client 410 (such as a stand-alone game application) is running on the terminal 400, and a virtual scene including role-playing is output during the running of the client 410.
- The virtual scene can be an environment in which game characters interact, for example plains, streets, and valleys where game characters fight. Taking a first-person perspective display of the virtual scene 100 as an example, a virtual object 401 is displayed in the virtual scene 100, and the virtual object 401 can be controlled by the user (or player).
- The controlled game character operates in the virtual scene in response to the real user's operations on buttons (including a joystick button, an attack button, a defense button, etc.).
- The virtual object 401 can also be an artificial intelligence (AI, Artificial Intelligence) set up in the virtual scene battle through training; the virtual object 401 can also be a non-player character (NPC, Non-Player Character) set up in the virtual scene interaction; the virtual object 401 can also be an inactive object or a movable object in the virtual scene 100.
- The terminal 400 is a terminal used by game developers. During the development and packaging stage of the game, the terminal 400 acquires first texture data of the virtual object 401 and conversion parameters corresponding to second texture data of the virtual object, wherein the data volume of the first texture data is smaller than the data volume of the second texture data, and the image information range of the first texture data is smaller than the image information range of the second texture data; based on the conversion parameters and the first texture data, a fitting rendering process is performed to obtain a fitting rendering image including the virtual object; a rendering loss between the fitting rendering image and a standard rendering image is determined, and the conversion parameters and the first texture data are updated based on the rendering loss, wherein the standard rendering image is a rendering image including the virtual object obtained by performing standard rendering processing based on the second texture data. The processing described so far is performed before the game starts running.
- When the game runs, the terminal 400 performs real-time rendering processing based on the updated conversion parameters and the updated first texture data to obtain a target rendering image including the virtual object, and performs human-computer interaction of the virtual scene based on the target rendering image, such as game confrontation.
- FIG. 1B is a schematic diagram of the architecture of the image rendering system provided by the embodiment of the present application, which is applied to the terminal 400 and the server 200 and is suitable for the application mode in which the calculation of the virtual scene is completed by relying on the computing power of the server 200 and the virtual scene is output at the terminal 400.
- the server 200 calculates the display data related to the virtual scene (such as scene data) and sends it to the terminal 400 through the network 300.
- The terminal 400 relies on graphics computing hardware to complete the loading and parsing of the display data, and relies on graphics output hardware to output the virtual scene to form visual perception.
- For example, two-dimensional video frames can be presented on the display screen of a smartphone, or video frames achieving a three-dimensional display effect can be projected on the lenses of augmented reality/virtual reality glasses; for other forms of perception of the virtual scene, the corresponding hardware output of the terminal 400 can be used, such as a microphone to form auditory perception, a vibrator to form tactile perception, and so on.
- the terminal 400 runs a client 410 (such as a game application in the online version), and interacts with other users by connecting to the server 200 (such as a game server), and the terminal 400 outputs the virtual scene 100 of the client 410.
- Taking a first-person perspective display of the virtual scene 100 as an example, a virtual object 401 is displayed in the virtual scene 100.
- The virtual object 401 can be a game character controlled by the user (or player), which operates in the virtual scene in response to the real user's operations on buttons (including a joystick button, an attack button, a defense button, etc.); for example, when the real user moves the joystick button to the left, the virtual object moves to the left in the virtual scene, and it can also stay still, jump, and use various functions (such as skills and props). The virtual object 401 can also be an artificial intelligence (AI, Artificial Intelligence) set up in the virtual scene battle through training; the virtual object 401 can also be a non-player character (NPC, Non-Player Character) set up in the virtual scene interaction; the virtual object 401 can also be an inactive object or a movable object in the virtual scene 100.
- The server 200 acquires first texture data of the virtual object 401 and conversion parameters corresponding to second texture data of the virtual object, wherein the data volume of the first texture data is smaller than the data volume of the second texture data, and the image information range of the first texture data is smaller than the image information range of the second texture data; based on the conversion parameters and the first texture data, a fitting rendering process is performed to obtain a fitting rendering image including the virtual object; a rendering loss between the fitting rendering image and the standard rendering image is determined, and the conversion parameters and the first texture data are updated based on the rendering loss.
- the standard rendering image is a rendering image including virtual objects obtained by performing standard rendering processing based on the second texture data.
- The terminal 400 receives the updated first texture data and the updated conversion parameters sent by the server 200 (which can be encapsulated in the installation package). During game installation and running, the terminal 400 performs real-time rendering processing based on the updated conversion parameters and the updated first texture data to obtain a target rendering image including the virtual object, and performs human-computer interaction of the virtual scene based on the target rendering image, such as game confrontation.
- the terminal 400 can implement the image rendering method provided by the embodiment of the present application by running a computer program
- The computer program can be a native program or a software module in the operating system; it can be a native application program (APP, APPlication), that is, a program that needs to be installed in the operating system to run, such as a game APP (that is, the above-mentioned client 410); it can also be a mini program, that is, a program that only needs to be downloaded into the browser environment to run; it can also be a mini game program that can be embedded in any APP.
- the above-mentioned computer program can be any form of application program, module or plug-in.
- the terminal 400 installs and runs an application program supporting a virtual scene.
- the application program may be any one of a first-person shooter game (FPS, First-Person Shooting game), a third-person shooter game, a virtual reality application program, a three-dimensional map program, or a multiplayer gun battle survival game.
- The user uses the terminal 400 to operate virtual objects located in the virtual scene to carry out activities, which include but are not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and building virtual buildings. Schematically, the virtual object may be a virtual character, such as a simulated character or an anime character.
- Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks within a wide area network or a local area network to realize the calculation, storage, processing, and sharing of data.
- Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, and application technology based on the cloud computing business model. It can form a resource pool that is used on demand, which is flexible and convenient. Cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources.
- The server 200 in FIG. 1B can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms.
- the terminal 400 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
- The terminal and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in the embodiments of the present application.
- FIG. 2 is a schematic structural diagram of an electronic device for image rendering provided by an embodiment of the present application.
- the terminal 400 shown in FIG. 2 includes: at least one processor 410, a memory 450, at least one network interface 420 and a user interface 430.
- Various components in the terminal 400 are coupled together through a bus system 440 .
- the bus system 440 is used to realize connection and communication among these components.
- the bus system 440 also includes a power bus, a control bus and a status signal bus.
- the various buses are labeled as bus system 440 in FIG. 2 .
- The processor 410 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
- User interface 430 includes one or more output devices 431 that enable presentation of media content, including one or more speakers and/or one or more visual displays.
- the user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
- Memory 450 may be removable, non-removable or a combination thereof.
- Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like.
- Memory 450 optionally includes one or more storage devices located physically remote from processor 410 .
- Memory 450 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory.
- the non-volatile memory can be a read-only memory (ROM, Read Only Memory), and the volatile memory can be a random access memory (RAM, Random Access Memory).
- the memory 450 described in the embodiment of the present application is intended to include any suitable type of memory.
- memory 450 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
- Operating system 451 including system programs for processing various basic system services and performing hardware-related tasks, such as framework layer, core library layer, driver layer, etc., for implementing various basic services and processing hardware-based tasks;
- the network communication module 452 is used to reach other electronic devices via one or more (wired or wireless) network interfaces 420.
- Exemplary network interfaces 420 include: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB, Universal Serial Bus), etc.;
- Presentation module 453, for enabling presentation of information via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430 (e.g., a user interface for operating peripherals and displaying content and information);
- the input processing module 454 is configured to detect one or more user inputs or interactions from one or more of the input devices 432 and translate the detected inputs or interactions.
- the image rendering device provided by the embodiment of the present application can be realized by software.
- FIG. 2 shows an image rendering device 455 stored in the memory 450, which can be software in the form of programs and plug-ins, including the following software modules: an acquisition module 4551, a fitting module 4552, a loss module 4553, and a rendering module 4554. These modules are logical, so they can be arbitrarily combined or further divided according to the functions realized. The function of each module will be explained below.
- the image rendering method provided by the embodiment of the present application will be specifically described below with reference to the accompanying drawings.
- the image rendering method provided in the embodiment of the present application may be executed solely by the terminal 400 in FIG. 1A , or may be executed cooperatively by the terminal 400 and the server 200 in FIG. 1B .
- FIG. 3A is a schematic flowchart of an image rendering method provided by an embodiment of the present application, which will be described in conjunction with steps 101 to 104 shown in FIG. 3A .
- The method shown in FIG. 3A can be executed by various forms of computer programs running on the terminal 400 and is not limited to the above-mentioned client 410; it can also be executed by the above-mentioned operating system 451, software modules, and scripts, so the client should not be regarded as limiting the embodiments of this application.
- In step 101, first texture data of a virtual object and conversion parameters corresponding to second texture data of the virtual object are acquired.
- the conversion parameters refer to the parameters involved in the rendering process using the second texture data.
- The virtual object is in a to-be-rendered state, and the virtual object in this state is a base object model, wherein the base object model includes the torso of the virtual object but excludes the display information used to decorate the virtual object; for example, the display information includes makeup (such as lip shape, eye shadow, pupil, iris, blush, etc.) and clothing for the torso and limbs (such as costumes, combat uniforms, etc.) and so on.
- the first texture data may be an LDRT texture
- the second texture data may be an HDRT texture
- the LDRT texture includes a first color value for each texel
- the HDRT texture includes a second color value for each texel
- the amount of data of the first texture data is less than that of the second texture data.
- for the second texture data (HDRT), each channel requires 32 bits, so each pixel (including 3 channels) requires 96 bits.
- for the first texture data (LDRT), each channel requires 8 bits, thus 24 bits per pixel (including 3 channels).
- the image information range of the first texture data is smaller than the image information range of the second texture data
- the image information range includes a dynamic range
- the dynamic range is the ratio of the highest brightness to the lowest brightness
- the dynamic range of HDRT is higher than that of LDRT.
- The first texture data can be the pixel parameters of each pixel in an initialized texture image, including material parameters and texture parameters; the first texture data can also be the data obtained in the (n-1)-th iteration of the iterative process, so that the first texture data obtained in the (n-1)-th iteration is used as the input of the n-th iteration, where n is an integer greater than 1, and the subsequent steps 102 and 103 need to be performed in each iteration.
- the first texture data is the pixel parameter of each pixel in the initialized texture image
- In step 102, based on the conversion parameters and the first texture data, a fitting rendering process is performed to obtain a fitting rendering image including the virtual object.
- The fitting rendering process is performed on the virtual object based on the conversion parameters and the first texture data, to obtain a fitting rendering image including the virtual object.
- the obtained first texture data is used as a resource, and the first texture data is fitted and rendered in an unlit material (Unlit) rendering manner to obtain a fitted rendering image including the virtual object.
- FIG. 3B is a schematic flow chart of the image rendering method provided by the embodiment of the present application.
- Step 102, in which a fitting rendering process is performed based on the conversion parameters and the first texture data to obtain a fitting rendering image including the virtual object, may be implemented through steps 1021 to 1022 shown in FIG. 3B.
- In step 1021, based on the conversion parameters, the first texture data is transformed towards the second texture data to obtain third texture data in the same dynamic range space as the second texture data.
- the first texture data includes a first color value of each texel
- the second texture data includes a second color value of each texel
- the conversion parameters include a first first-order parameter and a second first-order parameter
- Transforming the first texture data of the virtual object towards the second texture data to obtain third texture data in the same dynamic range space as the second texture data can be realized through the following technical solution: perform the following processing for each texel in the first texture data: determine a first product result between the first color value of the texel and the first first-order parameter; determine a first summation result of the first product result and the second first-order parameter; determine the ratio between the first color value of the texel and the first summation result as the third color value of the texel, which is in the same dynamic range space as the second color value; the third color values of the multiple texels constitute the third texture data.
- The space transformation process is realized by a first-order high dynamic range space transformation function, shown as formula (1), which follows directly from the per-texel processing described above:
- c_H = c_L / (p * c_L + q)    (1)
- where p and q are the parameters to be fitted for the first-order tone mapping function, p being the first first-order parameter and q the second first-order parameter; c_L is the first color value and c_H is the third color value. Different values of p and q correspond to different curves, as shown in FIG. 6D, where the abscissa of each curve is the first color value and the ordinate is the third color value.
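A minimal sketch of this per-texel first-order conversion, written as a PyTorch function so that gradients can later flow back to both the first texture data and the parameters p and q; the tensor shapes and initial values are assumptions for illustration only:

```python
import torch

def ldr_to_hdr_first_order(c_l: torch.Tensor, p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    # c_l: first texture data (first color values), e.g. shape (1, 3, H, W) in [0, 1].
    # p, q: first and second first-order conversion parameters (scalar tensors here).
    # Returns the third color values c_H in the same dynamic range space as the HDR texture.
    return c_l / (p * c_l + q)

# Example: a learnable 256x256 RGB texture and learnable conversion parameters.
texture_ldr = torch.rand(1, 3, 256, 256, requires_grad=True)
p = torch.tensor(0.5, requires_grad=True)
q = torch.tensor(1.0, requires_grad=True)
texture_converted = ldr_to_hdr_first_order(texture_ldr, p, q)
```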
- the first texture data includes a first color value of each texel
- the second texture data includes a second color value of each texel
- the conversion parameters include a first second-order parameter, a second second-order parameter, a third second-order parameter, a fourth second-order parameter, and a fifth second-order parameter
- In step 1021, based on the conversion parameters, the first texture data of the virtual object is transformed towards the second texture data to obtain the third texture data in the same dynamic range space as the second texture data.
- the space conversion process is realized by a second-order high dynamic range space conversion function
- the second-order high dynamic range space conversion function is shown in formula (2):
- where p, q, r, s, and t are the parameters to be fitted for the second-order high dynamic range space conversion function, p being the first second-order parameter, q the second second-order parameter, r the third second-order parameter, s the fourth second-order parameter, and t the fifth second-order parameter; different combinations of p, q, r, s, t correspond to different curves; c_H is the third color value and c_L is the first color value.
- FIG. 6E shows the different curves corresponding to different combinations of p, q, r, s, t; the abscissa of each curve is the first color value and the ordinate is the third color value.
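Formula (2) itself is not reproduced in this extract. Purely as an illustrative assumption (not the patent's actual formula), a second-order rational conversion with five fitted parameters p, q, r, s, t could take a form such as the following sketch:

```python
import torch

def ldr_to_hdr_second_order(c_l: torch.Tensor, p, q, r, s, t) -> torch.Tensor:
    # Hypothetical second-order rational mapping with five fitted parameters.
    # This is an assumed form for illustration only; the actual formula (2) is defined in the patent.
    return (p * c_l ** 2 + q * c_l + r) / (s * c_l ** 2 + t * c_l + 1.0)
```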
- In step 1022, based on the third texture data, a fitting rendering process is performed to obtain a fitting rendering image including the virtual object.
- The third texture data includes the third color value of each texel. In step 1022, performing a fitting rendering process based on the third texture data to obtain a fitting rendering image including the virtual object can be implemented through the following technical solution: obtain the two-dimensional texture coordinates of the virtual object; obtain the differentiable rendering framework corresponding to the fitting rendering process; forward-propagate the two-dimensional texture coordinates and the third color value of each texel through the differentiable rendering framework to obtain a fitting rendering image including the virtual object.
- The differentiable rendering framework is obtained by software-encapsulating the hardware-based rendering process.
- The hardware-based rendering process can be a rendering method using unlit materials. Because of the software encapsulation it is differentiable, so backpropagation based on gradients can subsequently be performed.
- The two-dimensional texture coordinates of the virtual object are acquired; they are predetermined or automatically generated, and may be UV2 coordinates. All image files are two-dimensional planes: the horizontal direction is U and the vertical direction is V, so any pixel on the image can be located through this two-dimensional plane, which defines the position information of each point on the image.
- The role of the UV coordinates is to accurately map each point on the image to the surface of the model object. The differentiable rendering framework corresponding to the fitting rendering process is obtained, and the two-dimensional texture coordinates and the third color value of each texel are forward-propagated through the differentiable rendering framework to obtain a fitting rendering image including the virtual object.
- the data participating in the forward propagation also includes rendering resources such as materials and lighting.
- FIG. 8 is a schematic diagram of the rasterized rendering process provided by the embodiment of this application. The entire transformed rasterized rendering process is implemented through CUDA (Compute Unified Device Architecture) and runs as a module of a machine learning library (such as PyTorch), where the CUDA implementation serves as the above-mentioned differentiable rendering framework; the entire rendering process can backpropagate gradients, so the entire process is differentiable.
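A minimal sketch of such a differentiable forward pass, under the assumption that the rasterization step has already produced per-pixel UV2 coordinates (`uv_screen`) and a coverage mask for the virtual object (both hypothetical tensors introduced here for illustration); the texture lookup uses PyTorch's grid_sample so that gradients can propagate back to the first texture data and the conversion parameters:

```python
import torch
import torch.nn.functional as F

def fit_render(texture_ldr, p, q, uv_screen, mask):
    # texture_ldr: (1, 3, Ht, Wt) learnable first texture data.
    # uv_screen:   (1, Hs, Ws, 2) per-pixel UV2 coordinates in [-1, 1] from rasterization.
    # mask:        (1, 1, Hs, Ws) screen-space coverage of the virtual object.
    texture_hdr = texture_ldr / (p * texture_ldr + q)   # dynamic range space conversion, formula (1)
    color = F.grid_sample(texture_hdr, uv_screen, mode='bilinear', align_corners=False)
    return color * mask                                  # fitted rendering image (unlit)
```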
- In step 103, the rendering loss between the fitted rendering image and the standard rendering image is determined, and the conversion parameters and the first texture data are updated based on the rendering loss.
- the orientation of the virtual object in the standard rendering image is the same as the orientation set during the fitting rendering in the iterative process
- the standard rendering image is a rendering image including the virtual object obtained by performing standard rendering processing based on the second texture data.
- the standard rendering processing may be physically based rendering processing. Specifically, based on the second texture data of the virtual object, the physically based rendering processing corresponding to the virtual object is performed to obtain a standard rendering image.
- the second texture data of the virtual object is loaded through the rendering technology based on physical laws (PBR, Physically Based Rendering), and the second texture data is rendered based on the physical laws to obtain a standard rendering image conforming to the physical laws.
- FIG. 3C is a schematic flow chart of the image rendering method provided by the embodiment of the present application.
- In step 103, determining the rendering loss between the fitted rendering image and the standard rendering image can be implemented through steps 1031 to 1032 shown in FIG. 3C.
- In step 1031, the overall pixel value difference between the standard rendering image and the fitting rendering image in screen space is determined.
- Determining the overall pixel value difference between the standard rendering image and the fitting rendering image in screen space in step 1031 can be achieved through the following technical solution: for each pixel at the same screen-space position in the fitting rendering image and the standard rendering image, perform the following processing: determine the first pixel value of the corresponding pixel in the fitting rendering image, and determine the second pixel value of the corresponding pixel in the standard rendering image; take the absolute value of the difference between the first pixel value and the second pixel value as the pixel value difference of the corresponding pixel; sum the pixel value differences of the multiple pixels to obtain the overall pixel value difference.
- Taking each pixel as the minimum unit for measuring the difference can effectively increase the value of the rendering loss, so that when updating based on the rendering loss, conversion parameters and first texture data with a better rendering effect can be obtained.
- a rendering loss is determined based on the overall pixel value difference, the length of the fitted rendered image, and the width of the fitted rendered image.
- the rendering loss is obtained based on formula (3), i.e., the overall pixel value difference normalized by the image size:
- Loss = (1 / (H * W)) * Σ_{i=1..H} Σ_{j=1..W} | Img1(i, j) - Img2(i, j) |    (3)
- where Img1 and Img2 represent the standard rendering image and the fitting rendering image respectively, H and W represent the length and width of Img1 (or Img2) respectively, and (i, j) indicates any pixel in screen space in the standard rendering image.
- rendering loss in the embodiment of the present application is not limited to the formula (3), and may also be other deformation formulas.
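A sketch of this screen-space rendering loss in PyTorch, assuming `img_fit` and `img_std` are the fitting and standard rendering images with identical shapes whose last two dimensions are H and W; the normalization by H * W follows the description of formula (3) above:

```python
def rendering_loss(img_fit, img_std):
    # Sum of per-pixel absolute differences, normalized by the image area H * W.
    h, w = img_fit.shape[-2], img_fit.shape[-1]
    return (img_fit - img_std).abs().sum() / (h * w)
```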
- Updating the conversion parameters and the first texture data based on the rendering loss may be implemented through the following technical solution: perform partial derivative processing on the first texture data based on the rendering loss to obtain the gradient corresponding to the first texture data; perform partial derivative processing on the conversion parameters based on the rendering loss to obtain the gradients corresponding to the conversion parameters; update the first texture data based on the gradient corresponding to the first texture data, and update the conversion parameters based on the gradients corresponding to the conversion parameters.
- Updating the first texture data based on its gradient can be achieved through the following technical solution: multiply the set learning rate by the gradient corresponding to the first texture data to obtain the data change value of the first texture data; add the data change value of the first texture data to the first texture data to obtain the updated first texture data. Updating the conversion parameters based on their gradients can be realized through the following technical solution: multiply the set learning rate by the gradient of the corresponding conversion parameter to obtain the data change value of the conversion parameter; add the data change value of the conversion parameter to the conversion parameter to obtain the updated conversion parameter.
- The process of updating the first texture data based on the rendering loss is similar to the backpropagation process of machine learning.
- The first texture data is input to the differentiable rasterization renderer, and the forward rendering process of the differentiable rasterization renderer outputs the fitting rendering image. Since the output result (the fitting rendering image) of the differentiable rasterization renderer has an error (the rendering loss) with respect to the standard rendering image, the error between the output result and the standard rendering image is calculated and backpropagated, where the backpropagation is implemented based on the gradient descent algorithm.
- The first texture data and the conversion parameters are adjusted according to the error. For example, the partial derivative of the rendering loss with respect to the first texture data is computed to generate the gradient of the rendering loss with respect to the first texture data; since the direction of the gradient indicates the direction in which the error grows, the gradient descent algorithm is used to update the first texture data. Likewise, the partial derivative of the rendering loss with respect to the conversion parameters is computed to generate the gradient of the rendering loss with respect to the conversion parameters, and the gradient descent algorithm is used to update the conversion parameters. The above process is iterated continuously until the iteration end condition is met.
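Continuing the earlier sketches (texture_ldr, p, q, fit_render, rendering_loss, with img_std assumed to be the precomputed standard rendering image), one manual update step could look as follows; the fixed learning rate is an assumed value, and note that a gradient-descent step moves each parameter against its gradient, i.e. the learning-rate-scaled change value is applied with a sign opposite to the gradient:

```python
learning_rate = 1e-2

img_fit = fit_render(texture_ldr, p, q, uv_screen, mask)
loss = rendering_loss(img_fit, img_std)
loss.backward()                      # backpropagate through the differentiable renderer

with torch.no_grad():
    for param in (texture_ldr, p, q):
        param -= learning_rate * param.grad   # data change value = learning rate * gradient
        param.grad.zero_()
```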
- In step 104, real-time rendering is performed based on the updated conversion parameters and the updated first texture data to obtain a target rendering image including the virtual object.
- In step 104, performing the real-time rendering process based on the updated conversion parameters and the updated first texture data may be implemented through the following technical solution: when the rendering loss is less than a loss threshold, perform real-time rendering processing based on the updated conversion parameters and the updated first texture data; or, when the number of updates reaches an update count threshold, perform real-time rendering processing based on the updated conversion parameters and the updated first texture data.
- the iteration end condition includes at least one of the following: the value of the rendering loss function is less than a loss threshold; the number of iterations reaches a set number.
- the updated conversion parameters and the updated first texture data are used as the final saved rendering basic resources for rendering processing.
- when the number of iterations is not limited, the loss threshold defines the degree of similarity between the target rendering image and the standard rendering image: the smaller the loss threshold, the more similar the target rendering image is to the standard rendering image. When the loss threshold is not limited, the set number of iterations defines the degree of similarity between the target rendering image and the standard rendering image: the larger the set number of iterations, the more similar the target rendering image is to the standard rendering image. Using the loss threshold or the number of updates to judge whether real-time rendering can be performed based on the currently updated first texture data and conversion parameters improves fitting efficiency and prevents waste of fitting resources.
- in step 104, performing real-time rendering processing based on the updated conversion parameters and the updated first texture data to obtain the target rendering image including the virtual object can be achieved through the following technical solution: based on the updated conversion parameters, perform space conversion processing on the updated first texture data toward the second texture data to obtain fourth texture data in the same dynamic range space as the second texture data; determine at least one two-dimensional texture coordinate of the virtual object; for each two-dimensional texture coordinate, perform the following processing: sample the texture image corresponding to the two-dimensional texture coordinate from the fourth texture data, and perform pasting processing on the sampled texture image; based on the pasting results of all two-dimensional texture coordinates, generate a target rendering image including the virtual object.
- the two-dimensional texture coordinates are predetermined or automatically generated; the two-dimensional texture coordinates can be UV2 coordinates. The UV2 coordinates define the position information of each point on the image, and these points are associated with the model to determine the position of the surface texture map.
- the UV coordinates accurately map each point on the image onto the surface of the model object.
- the fourth texture data is sampled through UV2, and the texture map formed by the sampled fourth texture data is pasted onto the base model of the virtual object, thereby realizing the real-time rendering process.
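- As a rough illustration of this runtime path (the names, shapes, and parameter values below are hypothetical, and in an actual game the sampling would happen in the engine's shader rather than in Python), the baked LDRT is first converted back toward HDR space with the fitted conversion parameters and then sampled with the UV2 coordinates:

```python
import torch
import torch.nn.functional as F

def to_hdr(texture_ldr, p, q):
    # Fitted first-order conversion toward the HDR space (assumed form).
    return texture_ldr / (p * texture_ldr + q)

texture_ldr = torch.rand(1, 3, 1024, 1024)        # baked first texture data (LDRT)
p, q = -0.8, 1.0                                  # fitted conversion parameters (illustrative)
texture_hdr = to_hdr(texture_ldr, p, q)           # fourth texture data, HDR space

uv2 = torch.rand(1, 720, 1280, 2) * 2.0 - 1.0     # per-pixel UV2 coordinates in [-1, 1]
screen = F.grid_sample(texture_hdr, uv2, mode="bilinear", align_corners=True)
print(screen.shape)  # torch.Size([1, 3, 720, 1280])
```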
- by rendering the first texture data and the second texture data separately, and updating the first texture data and the conversion parameters involved in rendering based on the first texture data according to the loss between the rendering results, and because the image information range of the second texture data is better than that of the first texture data while the data volume of the first texture data is smaller than that of the second texture data, real-time image rendering based on the updated first texture data and conversion parameters can achieve a rendering effect comparable to that of the second texture data while consuming less storage space and computing resources, thereby effectively improving the utilization of rendering resources.
- the server obtains the first texture data of the virtual object and the conversion parameters corresponding to the second texture data of the virtual object, wherein the data amount of the first texture data is smaller than the data amount of the second texture data, and the image information range of the first texture data is smaller than the image information range of the second texture data; based on the conversion parameters and the first texture data, a fitting rendering process is performed to obtain a fitting rendering image including the virtual object; the rendering loss between the fitting rendering image and the standard rendering image is determined, and the conversion parameters and the first texture data are updated based on the rendering loss.
- the standard rendering image is a rendering image including the virtual object obtained by performing standard rendering processing based on the second texture data. All of the above processing is executed before the game starts, for example, in the development and packaging stage of the game. The terminal receives the updated first texture data and the updated conversion parameters (which can be encapsulated in the installation package) sent by the server; during game installation and running, the terminal performs real-time rendering processing on the virtual object based on the updated conversion parameters and the updated first texture data to obtain a target rendering image including the virtual object, and performs human-computer interaction in the virtual scene, such as game confrontation, based on the target rendering image. It should be noted that the embodiment of this application is applicable to most mobile game projects, especially mobile game projects with high requirements on both rendering performance and rendering effects; this solution can greatly improve rendering efficiency while keeping the difference from the original effect very small.
- Fig. 4 is a rendering schematic diagram provided by the embodiment of the present application.
- the in-game rendering effect of the embodiment of the present application only needs one texture and the corresponding UV2 vertex data, which can be regarded as pasting the fitted texture 401 onto the model 402 (that is, the virtual object to be rendered) using the second set of texture coordinates (UV2).
- Fig. 5 is a schematic flow chart of the fitting rendering provided by the embodiment of the present application. As shown in Fig. 5, the rendering process is a very complicated function f whose input is x, where x is a parameter set (the parameters to be fitted) containing data such as model vertex positions, material parameters, and texture parameters, and the output y of f is the rendering result.
- the embodiment of this application does not optimize data such as vertex positions; only the first texture data and the conversion parameters in x need to be optimized.
- the goal is, given the HDRR rendering result, to compute x such that f(x) approaches the HDRR rendering result as closely as possible; the degree of approximation is measured using a loss function (Loss). The partial derivative of the fitting rendering image with respect to the parameters to be fitted is ∂y/∂x, the partial derivative of the loss function with respect to the fitting rendering image is ∂Loss/∂y, and the partial derivative of the loss function with respect to the parameters to be fitted is obtained by the chain rule as ∂Loss/∂x = (∂Loss/∂y)·(∂y/∂x).
- the optimization algorithm uses the gradient descent algorithm.
- f is a very complicated function, and the normal rendering process is not differentiable. Therefore, in order to use the gradient descent algorithm to find the optimal x, f needs to be differentiable.
- the embodiment of this application provides a rasterization rendering framework that is derivable (that is, differentiable). It should be noted that f is a complete rasterization rendering process; in order to make it differentiable, the embodiment of the present application improves the rasterization rendering. Referring to FIG. 8, the improved rasterization rendering process is implemented through the Compute Unified Device Architecture (CUDA) and runs as a module of a machine learning library (such as PyTorch); the entire rendering process can backpropagate gradients, so the entire process is differentiable.
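- To illustrate why running the rendering steps inside a machine learning library makes the whole chain differentiable, the toy sketch below uses PyTorch's built-in bilinear texture sampling, which already backpropagates gradients to the sampled texture; it is only a stand-in for the full rasterization pipeline of FIG. 8, not that pipeline itself.

```python
import torch
import torch.nn.functional as F

# Texture being fitted: N x C x H x W, values in [0, 1], gradients enabled.
texture = torch.rand(1, 3, 256, 256, requires_grad=True)

# UV coordinates for a 128 x 128 screen-space output, in grid_sample's
# [-1, 1] convention (N x H_out x W_out x 2).
uv = torch.rand(1, 128, 128, 2) * 2.0 - 1.0

sampled = F.grid_sample(texture, uv, mode="bilinear", align_corners=True)

# Any scalar loss on the sampled result backpropagates to the texture texels.
target = torch.rand_like(sampled)
loss = (sampled - target).abs().mean()
loss.backward()

print(texture.grad.shape)  # torch.Size([1, 3, 256, 256])
```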
- in order to reduce package size and memory usage, the final stored texture data is LDRT, whose storage range is data between [0-1]. It is converted to the high dynamic range space during rendering, and rendering is then performed based on the converted data; the data range of the high dynamic range space is wider and contains more of the data that the human visual system can perceive.
- the function used in this conversion process is the high dynamic range space conversion function, which can be regarded as the inverse process of tone mapping.
- the tone mapping function Reinhard can refer to formula (4), and its function curve is shown as the dotted curve in Fig. 6A, where c_H is the color value of the high dynamic range space and c_L is the color value of the low dynamic range space.
- its corresponding high dynamic range space conversion function (Inverse Reinhard) can refer to formula (5), and its function curve is shown as the dashed curve in Fig. 6B.
- the tone mapping function Aces can also refer to formula (6), and its function curve is shown as the solid curve in Fig. 6A, where c_H is the original HDR color value and c_L is the LDR color value.
- its corresponding high dynamic range space conversion function (Inverse Aces) can refer to formula (7), and its function curve is shown as the solid curve in Fig. 6B.
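- Formulas (4) to (7) themselves are not reproduced in this extract, so the sketch below only shows the commonly used simple Reinhard operator and its inverse as an assumed form; the Aces pair in formulas (6) and (7) follows the same tone-map/inverse pattern but is omitted here because its exact coefficients are not shown.

```python
import torch

def reinhard(c_h):
    # Assumed simple Reinhard tone mapping, HDR -> LDR: c_L = c_H / (1 + c_H).
    return c_h / (1.0 + c_h)

def inverse_reinhard(c_l):
    # Its inverse (high dynamic range space conversion), LDR -> HDR,
    # valid for c_L < 1: c_H = c_L / (1 - c_L).
    return c_l / (1.0 - c_l)

c_h = torch.tensor([0.0, 0.5, 2.0, 10.0])
print(torch.allclose(inverse_reinhard(reinhard(c_h)), c_h))  # True
```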
- in the embodiment of this application, an LDRT needs to be fitted for each of the different rendering effects of different HDRRs, so that rendering with the unlit material based on the LDRT approaches the rendering effect of the HDRR. For different rendering effects, the brightness ranges are actually different, so the parameters of a single high dynamic range space conversion function are not the optimal solution: whether it is the high dynamic range space conversion function corresponding to Reinhard or the one corresponding to Aces, the parameters are fixed and cannot be well adapted to the various rendering brightness ranges.
- Figures 7A-7B are rendering diagrams of the image rendering method provided by the embodiment of the present application. The color brightness range of the virtual object in Figure 7A is concentrated more in the dark part, while the color brightness range of the virtual object in Figure 7B is concentrated more in the bright part, so ideally the curves of their high dynamic range space conversion functions should be as shown in Figure 6C. The virtual object in Figure 7A should use the dashed curve shown in Figure 6C, that is, more of the texture data range is assigned to the darker color space: the dashed curve assigns the texture data range [0-0.9] to the brightness interval [0-0.2], assigns less of the texture data range to the brighter color space by assigning the texture data range [0.9-1.0] to the brightness interval [0.2-1.6], and controls the maximum brightness to be around 1.6. This distribution of data allows the darker areas of the rendering result to achieve a higher accuracy range and the best fitting effect. The virtual object in Figure 7B should use the solid curve in Figure 6C, that is, more of the texture data range is allocated to the brighter color space: the solid curve allocates the texture data range [0.44-1.0] to the brightness interval [0.2-4.34], allocates less of the texture data range to the darker color space by allocating the texture data range [0-0.44] to the brightness interval [0-0.2], and controls the maximum brightness to be around 4.34. This distribution of data allows the brighter areas of the rendering result to achieve a higher accuracy range and the best fitting effect. It follows that, to achieve the best fitting effect, the curve of the high dynamic range space conversion function needs to be customized and obtained by automatic fitting; the embodiment of the present application therefore implements automatic parameter fitting for the following two forms of high dynamic range space conversion functions.
- the first-order tone mapping function is shown in equation (8):
- where a, b, c, and d are the parameters of the first-order tone mapping function, c_H is the original HDR color value, and c_L is the LDR color value.
- the first-order high dynamic range space conversion function corresponding to the first-order tone mapping function is shown in formula (9):
- where e, f, g, and h are the parameters of the first-order high dynamic range space conversion function, c_H is the original HDR color value, and c_L is the LDR color value.
- through the subsequent machine learning fitting algorithm, the optimal e, f, g, and h parameters are fitted for each rendering effect of the HDRR; moreover, since c_H is also 0 when c_L is 0, the parameter f is always 0, and further simplification yields formula (10).
- p and q are the parameters that need to be fitted in the first-order high dynamic range space conversion function, and different p and q correspond to different curves, where c_H is the original HDR color value and c_L is the LDR color value; Figure 6D shows the different curves corresponding to different p and q, with the abscissa being the original HDR color value and the ordinate being the LDR color value.
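- Based on the step-by-step description of the first-order conversion given in the device-embodiment text further below (formula (10) itself is not reproduced in this extract, so its exact parameterization is an assumption drawn from that description), a minimal sketch:

```python
import torch

def first_order_inverse_tone_map(c_l, p, q):
    # First-order LDR -> HDR conversion as described in the text:
    # c_H = c_L / (p * c_L + q), with p and q the parameters to be fitted.
    return c_l / (p * c_l + q)

# With p = -1 and q = 1 this reduces to the inverse Reinhard curve c_L / (1 - c_L).
c_l = torch.linspace(0.0, 0.9, 10)
print(first_order_inverse_tone_map(c_l, -1.0, 1.0))
```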
- the second-order tone mapping function is shown in formula (11):
- where a, b, c, d, e, and f are the parameters of the second-order tone mapping function, c_H is the original HDR color value, and c_L is the LDR color value.
- the second-order high dynamic range space conversion function corresponding to the second-order tone mapping function is shown in formula (12), where p, q, r, s, t, u, and v are the parameters of the second-order high dynamic range space conversion function, c_H is the original HDR color value, and c_L is the LDR color value.
- through the subsequent machine learning fitting algorithm, the optimal p, q, r, s, t, u, and v parameters are fitted for each rendering effect of the HDRR; in addition, since c_H is also 0 when c_L is 0, the parameters u and v are always 0, and further simplification yields formula (13).
- p, q, r, s, and t are the parameters that need to be fitted for the second-order high dynamic range space conversion function, and different p, q, r, s, and t correspond to different curves, where c_H is the original HDR color value and c_L is the LDR color value; Fig. 6E shows the different curves corresponding to different parameter combinations, with the abscissa being the original HDR color value and the ordinate being the LDR color value.
- the second-order high dynamic range space conversion function can give more curve shapes than the first-order high dynamic range space conversion function and supports more ways of compressing the brightness range, while the calculation amount corresponding to the first-order high dynamic range space conversion function is smaller than that of the second-order high dynamic range space conversion function.
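- For the second-order conversion, the sketch below literally transcribes the step-by-step description given in the device-embodiment text further below; formula (13) itself is not reproduced in this extract, so the grouping and signs here should be verified against it before use.

```python
import torch

def second_order_inverse_tone_map(c_l, p, q, r, s, t):
    # Transcribed from the textual description (verify against formula (13)):
    # c_H = (sqrt(p*c_L^2 + q*c_L + r) + s*c_L + sqrt(r)) / (c_L + t)
    inner = p * c_l ** 2 + q * c_l + r
    return (inner.sqrt() + s * c_l + r ** 0.5) / (c_l + t)

c_l = torch.linspace(0.0, 1.0, 5)
print(second_order_inverse_tone_map(c_l, 1.0, 1.0, 0.25, 0.5, 1.0))
```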
- as shown in Fig. 9, the input part includes: 1) the original virtual object rendered by PBR, including a model 901, a material ball, and lighting 902 (which may include parallel light, point light, surface light sources, etc.); 2) a camera 903, which remains still during the entire fitting process.
- the output part includes: 1) the automatically generated model UV2, which is used to map the texture space to the model rendering result; 2) a texture 1001 unwrapped based on UV2, as shown in Figure 10; when the fitting is completed, what is stored in this texture is the baked LDRT data, and the pixel parameters of this texture are represented by φ in the following description; 3) the parameters of the optimal high dynamic range space conversion function (Inverse Tone Mapping) obtained by fitting.
- as shown in Fig. 11, under the given input and output conditions, the complete fitting process mainly includes 8 steps. In step 801, the scene and various parameters are initialized: before the machine learning fitting, the scene is initialized (loading the model, textures, materials, and so on), the position and orientation of the camera, the position and orientation of the model, and the parameters of the lights are set, and the model UV2 is generated automatically; in addition, the pixel data of the texture image (the first texture data) and the parameters θ of the high dynamic range space conversion function (Inverse Tone Mapping) are initialized, generally to a grey constant as shown in Fig. 12. The position and orientation of the camera and the light parameters are not optimized during the fitting.
- in step 802, the orientation of the character is set; specifically, the orientation of the virtual object (that is, the model) is set during the machine learning fitting process.
- in the actual game running, the virtual object will not always stay in the same orientation, so it is necessary to fit the rendering effect of each orientation of the character, so as to ensure that the fitting effect of the virtual object in each state is relatively accurate.
- FIG. 13 shows an example of setting the orientation of the virtual object: there are three different orientations of the virtual object 1301, and in each iteration one orientation of the virtual object is set at random.
- in step 803, standard rendering processing is performed to obtain the HDRR rendering result.
- the HDRR rendering result is the standard rendering effect that needs to be fitted; PBR rendering with the HDRT is performed first to obtain a rendering image of a certain orientation of the virtual object 1401 (that is, a standard rendering image), as shown in FIG. 14.
- in step 804, a fitting rendering process is performed to obtain a fitting rendering image including the virtual object: using the same orientation of the virtual object and the first texture data as the resource, rendering is performed with the unlit material (Unlit) rendering method to obtain a rendering image of the same orientation (that is, the fitting rendering image), as shown in FIG. 15.
- in step 805, the rendering loss Loss is calculated. In order to make the fitting rendering image approach the standard rendering image, the fitted texture needs to be modified; first, the gap between the fitting rendering image and the standard rendering image is calculated, and FIG. 16 shows the pixel difference values of the two in screen space. The embodiment of the present application uses the L1 loss function, as shown in formula (14):
- where Img1 and Img2 represent the standard rendering image and the fitting rendering image respectively, H and W represent the length and width of Img1 (or Img2) respectively, and (i, j) indicates any pixel in screen space in the standard rendering image.
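- As an illustration, a minimal sketch of this screen-space L1 rendering loss (the averaging over H·W follows the description above; averaging additionally over the color channels only changes the loss by a constant factor):

```python
import torch

def rendering_loss(img_standard, img_fit):
    # L1 rendering loss: mean absolute per-pixel difference in screen space,
    # i.e. (1 / (H * W)) * sum over (i, j) of |Img1(i, j) - Img2(i, j)|.
    return (img_standard - img_fit).abs().mean()

img1 = torch.rand(512, 512, 3)   # standard rendering image (the HDRR target)
img2 = torch.rand(512, 512, 3)   # fitting rendering image
print(rendering_loss(img1, img2))
```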
- in step 806, it is judged whether the rendering loss is smaller than the loss threshold; if it is not smaller, the process goes to step 807 and step 808; if it is smaller, the process goes to step 809.
- in step 807, the gradient (Gradient) of the first texture data and the gradient of the conversion parameters are calculated.
- after the Loss is calculated, the gradient of the rendering loss Loss with respect to the first texture data and the gradient with respect to the conversion parameters can be calculated through the PyTorch framework and the differentiable rasterization renderer.
- in step 808, the first texture data and the conversion parameters are updated with their corresponding gradients, and steps 802 to 805 continue to be executed.
- after the gradient of the first texture data is calculated, the PyTorch optimizer is used to update the first texture data; after the gradient of the conversion parameters is calculated, the PyTorch optimizer is used to update the conversion parameters; the process then returns to step 802 to enter the next iteration. By continuously repeating the iterative process of step 802 to step 807, the first texture data and the conversion parameters gradually converge to the optimal values.
- in step 809, the formats of the first texture data and the conversion parameters are converted and output.
- when the value of the rendering loss Loss is less than the loss threshold, it indicates that the fitting rendering image is already very close to the standard rendering image; the entire iterative process can then be exited, the first texture data and the conversion parameters are saved, and in the game the resulting texture map can be sampled using UV2 to obtain rendering results close to HDRR.
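- Putting steps 801 to 809 together, the following condensed PyTorch-style sketch mirrors the loop described above. The `random_orientation`, `render_pbr_hdrr`, and `render_unlit_fit` helpers are hypothetical placeholders standing in for the real scene setup, the HDRT + PBR target renderer, and the differentiable unlit renderer; the texture size, learning rate, and loss threshold are illustrative values, and the loop exits once the loss falls below the threshold, following the exit condition described for step 809.

```python
import torch

def random_orientation():
    # Placeholder for step 802: pick a random orientation for the virtual object.
    return torch.rand(()) * 360.0

def render_pbr_hdrr(orientation):
    # Placeholder for step 803: the HDRT + PBR standard rendering (the target).
    return torch.rand(256, 256, 3) * 4.0

def render_unlit_fit(texture_ldr, params, orientation):
    # Placeholder for step 804: differentiable unlit rendering based on the LDRT;
    # here it only converts the texture toward HDR space and crops it.
    p, q = params
    hdr = texture_ldr / (p * texture_ldr + q)
    return hdr[:256, :256, :]

texture_ldr = torch.full((1024, 1024, 3), 0.5, requires_grad=True)  # step 801: grey-initialised LDRT
conv_params = torch.tensor([1.0, 1.0], requires_grad=True)          # inverse-tone-mapping parameters
optimizer = torch.optim.Adam([texture_ldr, conv_params], lr=1e-2)
loss_threshold = 1e-3

for step in range(10000):
    orientation = random_orientation()                               # step 802
    target = render_pbr_hdrr(orientation)                            # step 803
    fit = render_unlit_fit(texture_ldr, conv_params, orientation)    # step 804
    loss = (target - fit).abs().mean()                               # step 805: L1 rendering loss

    if loss.item() < loss_threshold:                                 # steps 806/809: close enough, stop
        break

    optimizer.zero_grad()
    loss.backward()                                                  # step 807: gradients
    optimizer.step()                                                 # step 808: PyTorch optimizer update
```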
- Step 804 is described in detail below; it includes two sub-steps and explains how to use the first texture data for Unlit rendering. First, formula (10) or formula (13) is used to convert the first texture data in LDR space to HDR space; at the beginning of the algorithm, the parameters of the formula are the default initial values, and in each subsequent iteration the parameters of the formula (the conversion parameters) are those updated based on the previous iteration, so that over the iterations of the algorithm they are gradually updated toward the optimal parameters through the gradient. Then, the UV2 texture coordinates are used to render the sampled HDR-space texture data into world space to obtain the final screen-space rendering result.
- Figure 17 is a schematic diagram of the loss effect of the image rendering method provided by the embodiment of the present application: 1701 is the standard rendering image; 1702 is the target rendering image obtained by rendering with the first texture data output in step 808 and the high dynamic range space conversion function obtained by fitting (the fitted high dynamic range space conversion function is shown in Figure 18, and the conversion parameters in the high dynamic range space conversion function are those output in step 808); 1703 is the pixel difference value between the target rendering image and the standard rendering image, from which it can be seen that the pixel difference between the two is very small.
- the embodiment of the present application provides an image rendering method that can automatically fit the HDRR rendering method and can fit an LDRT to replace the HDRT as the rendering basis, greatly reducing the amount of texture data; it can achieve a rendering effect close to HDRR at very low overhead, which greatly improves the frame rate of the game and reduces power consumption.
- the software modules stored in the image rendering device 455 of the memory 450 may include: an acquisition module 4551, configured to acquire the first texture data of the virtual object and the conversion parameters corresponding to the second texture data of the virtual object, wherein the data volume of the first texture data is smaller than the data volume of the second texture data, and the image information range of the first texture data is smaller than the image information range of the second texture data; a fitting module 4552, configured to perform fitting rendering processing based on the conversion parameters and the first texture data to obtain a fitting rendering image including the virtual object; a loss module 4553, configured to determine the rendering loss between the fitting rendering image and the standard rendering image, and update the conversion parameters and the first texture data based on the rendering loss, where the standard rendering image is a rendering image including the virtual object obtained by performing standard rendering processing based on the second texture data; and a rendering module 4554, configured to perform real-time rendering processing based on the updated conversion parameters and the updated first texture data to obtain a target rendering image including the virtual object.
- the fitting module 4552 is further configured to: based on the conversion parameters, perform space conversion processing on the first texture data of the virtual object toward the second texture data to obtain third texture data, wherein the third texture data is in the same dynamic range space as the second texture data; and based on the third texture data, perform fitting rendering processing corresponding to the virtual object to obtain a fitting rendering image including the virtual object.
- the first texture data includes a first color value of each texel, the second texture data includes a second color value of each texel, and the conversion parameters include a first first-order parameter and a second first-order parameter; the fitting module 4552 is also configured to: perform the following processing for each texel in the first texture data: determine the first product result between the first color value of the texel and the first first-order parameter; determine the first summation result of the first product result and the second first-order parameter; determine the ratio between the first color value of the texel and the first summation result as the third color value of the texel that is in the same dynamic range space as the second color value; and combine the third color values of multiple texels to form the third texture data.
- the first texture data includes a first color value of each texel, the second texture data includes a second color value of each texel, and the conversion parameters include a first second-order parameter, a second second-order parameter, a third second-order parameter, a fourth second-order parameter, and a fifth second-order parameter; the fitting module 4552 is also configured to: perform the following processing for each texel in the first texture data: determine the second product result between the square of the first color value of the texel and the first second-order parameter, and the third product result between the first color value of the texel and the second second-order parameter; sum the second product result, the third product result, and the third second-order parameter to obtain the second summation result; determine the fourth product result between the first color value of the texel and the fourth second-order parameter; sum the square root of the second summation result, the fourth product result, and the square root of the third second-order parameter to obtain the third summation result; determine the fourth summation result of the first color value of the texel and the fifth second-order parameter; determine the ratio between the third summation result and the fourth summation result as the third color value of the texel that is in the same dynamic range space as the second color value; and combine the third color values of multiple texels to form the third texture data.
- the third texture data includes the third color value of each texel; the fitting module 4552 is further configured to: obtain the two-dimensional texture coordinates of the virtual object; obtain a differentiable rendering framework corresponding to the fitting rendering process; and forward-propagate the two-dimensional texture coordinates and the third color value of each texel in the differentiable rendering framework to obtain a fitting rendering image including the virtual object.
- the loss module 4553 is further configured to: determine the overall pixel value difference between the standard rendering image and the fitting rendering image in screen space; and determine the rendering loss based on the overall pixel value difference, the length of the fitting rendering image, and the width of the fitting rendering image.
- the loss module 4553 is further configured to: perform the following processing for any identical pixel of the fitting rendering image and the standard rendering image in screen space: determine the first pixel value of the corresponding pixel in the fitting rendering image, and determine the second pixel value of the corresponding pixel in the standard rendering image; use the absolute value of the difference between the first pixel value and the second pixel value as the pixel value difference of the pixel; and sum the pixel value differences of multiple pixels in screen space to obtain the overall pixel value difference.
- the loss module 4553 is further configured to: perform partial derivative processing on the first texture data based on the rendering loss to obtain the gradient corresponding to the first texture data; perform partial derivative processing on the conversion parameters based on the rendering loss to obtain the gradient corresponding to the conversion parameters; and update the first texture data based on the gradient corresponding to the first texture data, and update the conversion parameters based on the gradient corresponding to the conversion parameters.
- the loss module 4553 is further configured to: multiply the set learning rate by the gradient corresponding to the first texture data to obtain the data change value of the first texture data; add the data change value of the first texture data to the first texture data to obtain updated first texture data; multiply the set learning rate by the gradient corresponding to the conversion parameters to obtain the data change value of the conversion parameters; and add the data change value of the conversion parameters to the conversion parameters to obtain updated conversion parameters.
- the rendering module 4554 is further configured to: when the rendering loss is less than the loss threshold, perform real-time rendering processing based on the updated conversion parameters and the updated first texture data; or, when the number of updates reaches the update count threshold, perform real-time rendering processing based on the updated conversion parameters and the updated first texture data.
- the rendering module 4554 is further configured to: based on the updated conversion parameters, perform space conversion processing on the updated first texture data toward the second texture data to obtain fourth texture data, wherein the fourth texture data and the second texture data are in the same dynamic range space; determine at least one two-dimensional texture coordinate of the virtual object; and perform the following processing for each two-dimensional texture coordinate: sample the texture image corresponding to the two-dimensional texture coordinate from the fourth texture data, and perform pasting processing on the sampled texture image; based on the pasting results of all two-dimensional texture coordinates, generate a target rendering image including the virtual object.
- An embodiment of the present application provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
- the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the above-mentioned image rendering method in the embodiment of the present application.
- An embodiment of the present application provides a computer-readable storage medium storing executable instructions; when the executable instructions are executed by a processor, they cause the processor to execute the image rendering method provided in the embodiment of the present application, for example, the image rendering method shown in FIGS. 3A-3C.
- the computer-readable storage medium can be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; it can also be any of various devices including one or any combination of the above memories.
- executable instructions may take the form of programs, software, software modules, scripts, or code written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- executable instructions may, but do not necessarily, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple cooperating files (for example, files that store one or more modules, subroutines, or sections of code).
- executable instructions may be deployed to be executed on one electronic device, on multiple electronic devices located at one location, or on multiple electronic devices distributed across multiple locations and interconnected by a communication network.
- In summary, the first texture data and the second texture data are rendered separately, and the first texture data and the conversion parameters involved in rendering based on the first texture data are updated based on the loss between the rendering results; since the image range information of the second texture data is better than that of the first texture data and the data volume of the first texture data is smaller than that of the second texture data, real-time image rendering based on the updated first texture data and conversion parameters can achieve a rendering effect comparable to that of the second texture data while consuming less storage space and computing resources, thereby effectively improving the utilization of rendering resources.
Abstract
本申请提供了一种图像渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品;方法包括:获取虚拟对象的第一纹理数据、以及对应虚拟对象的第二纹理数据的转换参数,基于转换参数以及第一纹理数据,进行拟合渲染处理,得到包括虚拟对象的拟合渲染图;确定拟合渲染图与标准渲染图之间的渲染损失,并基于渲染损失更新转换参数以及第一纹理数据,其中,标准渲染图是基于第二纹理数据进行标准渲染处理得到的包括虚拟对象的渲染图;基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理,得到包括虚拟对象的目标渲染图。
Description
相关申请的交叉引用
本申请基于申请号为202210202833.0、申请日为2022年3月3日的中国专利申请提出,并要求中国专利申请的优先权,中国专利申请的全部内容在此引入本申请作为参考。
本申请涉及计算机图形图像技术,尤其涉及一种图像渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品。
基于图形处理硬件的显示技术,扩展了感知环境以及获取信息的渠道,尤其是虚拟场景的显示技术,能够根据实际应用需求实现受控于用户或人工智能的虚拟对象之间的多样化的交互,具有各种典型的应用场景,例如在游戏等的虚拟场景中,能够模拟虚拟对象之间的真实的对战过程。
相关技术中,通常会对高动态范围的纹理资源进行压缩处理,从而减少纹理资源带来的带宽占用,但是基于压缩后的纹理资源进行渲染时难以达到较高的渲染效果,相关技术中尚无兼顾渲染资源以及渲染效果的方案。
发明内容
本申请实施例提供一种图像渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品,能够利用不同渲染图之间的损失,基于较少的带宽占用的纹理数据实现较高的渲染效果,从而提高了渲染资源利用率。
本申请实施例的技术方案是这样实现的:
本申请实施例提供一种图像渲染方法,包括:
获取虚拟对象的第一纹理数据、以及对应所述虚拟对象的第二纹理数据的转换参数,其中,所述第一纹理数据的数据量小于所述第二纹理数据的数据量,且所述第一纹理数据的图像信息范围小于所述第二纹理数据的图像信息范围;
基于所述转换参数以及所述第一纹理数据,进行拟合渲染处理,得到包括所述虚拟对象的拟合渲染图;
确定所述拟合渲染图与标准渲染图之间的渲染损失,并基于所述渲染损失更新所述转换参数以及所述第一纹理数据,其中,所述标准渲染图是基于所述第二纹理数据进行标准渲染处理得到的包括所述虚拟对象的渲染图;
基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理,得到包括所述虚拟对象的目标渲染图。
本申请实施例提供一种图像渲染装置,包括:。
获取模块,配置为获取虚拟对象的第一纹理数据、以及对应所述虚拟对象的第二纹理数据的转换参数,其中,所述第一纹理数据的数据量小于所述第二纹理数据的数据量,且所述第一纹理数据的图像信息范围小于所述第二纹理数据的图像信息范围;
拟合模块,配置为基于所述转换参数以及所述第一纹理数据,进行拟合渲染处理,得到包括所述虚拟对象的拟合渲染图;
损失模块,配置为确定所述拟合渲染图与标准渲染图之间的渲染损失,并基于所述渲染损失更新所述转换参数以及所述第一纹理数据,其中,所述标准渲染图是基于所述第二纹理数据进行标准渲染处理得到的包括所述虚拟对象的渲染图;
渲染模块,配置为基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理,得到包括所述虚拟对象的目标渲染图。
本申请实施例提供一种电子设备,包括:
存储器,用于存储计算机可执行指令;
处理器,用于执行所述存储器中存储的计算机可执行指令时,实现本申请实施例提供的图像渲染方法。
本申请实施例提供一种计算机可读存储介质,存储有计算机可执行指令,用于被处理器执行时,实现本申请实施例提供的图像渲染方法。
本申请实施例提供一种计算机程序产品,包括计算机程序或计算机可执行指令,所述计算机程序或计算机可执行指令被处理器执行时实现本申请实施例提供的图像渲染方法。
本申请实施例具有以下有益效果:
通过对第一纹理数据以及第二纹理数据分别进行渲染处理,并基于渲染结果之间的损失更新第一纹理数据,并更新基于第一纹理数据进行渲染时所涉及到的转换参数,由于第二纹理数据的图像范围信息优于第一纹理数据,且第一纹理数据的数据量小于第二纹理数据,从而基于更新得到的第一纹理数据以及转换参数进行实时图像渲染时,可以仅消耗较少的存储空间以及计算资源,达到对标第二纹理数据的渲染效果,进而有效提高渲染资源利用率。
图1A-图1B是本申请实施例提供的图像渲染系统的架构示意图;
图2是本申请实施例提供的用于图像渲染的电子设备的结构示意图;
图3A-图3C是本申请实施例提供的图像渲染方法的流程示意图;
图4是本申请实施例提供的图像渲染方法的渲染示意图;
图5是本申请实施例提供的图像渲染方法的拟合渲染流程示意图;
图6A-图6E是本申请实施例提供的图像渲染方法的动态范围转换示意图;
图7A-图7B是本申请实施例提供的图像渲染方法的渲染示意图;
图8是本申请实施例提供的图像渲染方法的光栅化渲染流程示意图;
图9是本申请实施例提供的图像渲染方法的机器学习拟合场景示意图;
图10是本申请实施例提供的图像渲染方法的拟合纹理示意图;
图11是本申请实施例提供的图像渲染方法的机器学习拟合的流程示意图;
图12是本申请实施例提供的图像渲染方法的第一纹理数据的初始化值的示意图;
图13是本申请实施例提供的图像渲染方法的虚拟对象的屏幕空间渲染图;
图14是本申请实施例提供的图像渲染方法的虚拟对象的屏幕空间渲染图;
图15是本申请实施例提供的图像渲染方法的虚拟对象的拟合的屏幕空间渲染图;
图16是本申请实施例提供的图像渲染方法的标准渲染结果与拟合结果在屏幕空间的像素差异值的示意图;
图17是本申请实施例提供的图像渲染方法的损失效果示意图;
图18是本申请实施例提供的图像渲染方法的动态范围空间转换函数的拟合示意图。
为了使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请作进一步地详细描述,所描述的实施例不应视为对本申请的限制,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
在以下的描述中,所涉及的术语“第一\第二”仅仅是是区别类似的对象,不代表针对对象的特定排序,可以理解地,“第一\第二”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
对本申请实施例进行进一步详细说明之前,对本申请实施例中涉及的名词和术语进行说明,本申请实施例中涉及的名词和术语适用于如下的解释。
1)客户端:终端中运行的用于提供各种服务的应用程序,例如视频播放客户端、游戏客户端等。
2)虚拟场景:游戏应用程序在终端上运行时显示(或提供)的虚拟游戏场景。该虚拟场景可以是对真实世界的仿真环境,也可以是半仿真半虚构的虚拟环境,还可以是纯虚构的虚拟环境。虚拟场景可以是二维虚拟场景、2.5维虚拟场景或者三维虚拟场景中的任意一种,本申请实施例对虚拟场景的维度不加以限定。例如,虚拟场景可以包括天空、陆地、海洋等,该陆地可以包括沙漠、城市等环境元素,用户可以控制虚拟对象在该虚拟场景中进行移动。
3)虚拟对象:虚拟场景中可以进行交互的各种人和物的形象,或在虚拟场景中的可活动对象。该可活动对象可以是虚拟人物、虚拟动物、动漫人物等,例如在虚拟场景中显示的人物、动物等。该虚拟对象可以是虚拟场景中的一个虚拟的用于代表用户的虚拟形象。虚拟场景中可以包括多个虚拟对象,每个虚拟对象在虚拟场景中具有自身的形状和体积,占据虚拟场景中的一部分空间。
4)图像渲染:将三维的光能传递处理转换为一个二维图像的过程。场景和物体用三维形式表示,更接近于现实世界,便于操纵和变换,而图像的显示设备大多是二维的光栅显示器和点阵化打印机。从三维场景的表示N维光栅和点阵化的表示就是图像渲染,即光栅化。光栅显示器可以看作是一个像素矩阵,在光栅显示器上显示的任何一个图像,实际上都是一些具有一种或多种颜色和灰度像素的集合。
5)色调映射(Tone Mapping):其作用是把高动态范围HDR的颜色映射为低动态范围LDR颜色,以便显示器可以正常显示,其对应的动态范围空间转换函数(Inverse Tone Mapping)的作用是把低动态范围LDR的颜色映射为高动态范围HDR颜色,还原得到原始亮度范围的颜色。
6)高动态范围(HDR,High-Dynamic Range),高动态范围的图像相比于普通图像可以提供更多的动态范围(例如,动态范围的宽度大于宽度阈值),并具有更高的细节程度(例如,细节程度超过细节程度阈值),根据不同的曝光时间的低动态范围(LDR,Low-Dynamic Range)的图像,利用每个曝光时间相对应最佳细节的LDR图像来合成最终HDR图像,能够更好的反映人真实环境中的视觉效果。
7)低动态范围(LDR,Low-Dynamic Range),动态范围的宽度不大于宽度阈值,并且细节程度不超过细节程度阈值,称为低动态范围,会导致高光或阴影中的细节丢失,在摄影中,动态范围以曝光值差异来衡量。
高动态范围渲染(HDRR,High Dynamic Range Rendering)与高动态范围贴图(HDRT,High Dynamic Range Texture)是游戏实时渲染中的重要技术,高动态范围贴图是具有高动态范围的贴图,两者结合所实现的渲染效果大幅超越低动态范围渲染(LDRR,Low Dynamic Range Rendering)与低动态范围贴图(LDRT,Low Dynamic Range Texture)两者结合所实现的渲染效果,低动态范围贴图是具有低动态范围的贴图,当游戏中的光照亮度范围比较大或者不同区域亮度差异明显时,渲染效果的差异越发明显。申请人在实施本申请实施例时发现虽然HDRR和HDRT的结合能提供更为逼真的渲染效果,但是HDRR和HDRT的结合所需要的渲染成本较高,例如,在未经压缩的HDRT中,每个通道需要32个比特位,从而每个像素(包括3个通道)需要96个比特位,然而在未经压缩的LDRT中,每个通道需要8个比特位,压缩完成之后甚至可以更小。申请人在实施本申请实施例时发现HDRR和HDRT不仅会给游戏带来包体占用和内存占用负担,在渲染的时候也会带来更多的带宽开销和计算负担,所以针对各个游戏,尤其是资源更为匮乏的手机端游戏,均会对HDRT进行压缩,并对HDRR进行计算优化,目的是尽量减少HDRT带来的包体占用以及带宽占用,减少HDRR的计算消耗,并尽量逼近原始HDRT和HDRR提供的渲染效果。
相关技术中提供了一种将HDRT压缩到LDRT的方法,通过计算LDRT和HDRT在贴图空间的差距值,然后使用Levenberg-Marquadt算法来求得使差异值最小的色调映射函数的转换参数,利用这个转换参数就可以把HDRT转换成LDRT进行打包封装。申请人在实施本申请实施例时发现相关技术中只考虑了贴图空间的纹理差异值,对于渲染来说应该考虑渲染结果之间的差异值,这样得到的结果会更为准确。
本申请实施例提供一种图像渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品,能够利用不同渲染图之间的损失,基于较少的带宽占用的纹理数据实现较高的渲染效果,从而提高了渲染资源利用率。为便于更容易理解本申请实施例提供的图像渲染方法,首先说明本申请实施例提供的图像渲染方法的示例性实施场景,本申请实施例提供的图像渲染方法中的虚拟对象可以完全基于终端输出,或者基于终端和服务器协同输出。
在一些实施例中,虚拟场景可以是供游戏角色交互的环境,例如可以是供游戏角色在虚拟场景中进行对战,通过控制游戏角色的行动可以在虚拟场景中进行双方互动,从而使用户能够在游戏的过程中舒缓生活压力。
在一个实施场景中,参见图1A,图1A是本申请实施例提供的图像渲染系统的架构示意图,适用于一些完全依赖于终端400的图形处理硬件计算能力即可完成虚拟场景100的相关数据计算的应用模式,例如单机版/离线模式的游戏,通过智能手机、平板电脑和虚拟现实/增强现实设备等各种不同类型的终端400完成虚拟场景的输出。
作为示例,图形处理硬件的类型包括中央处理器(CPU,Central Processing Unit)和图形处理器(GPU,Graphics Processing Unit)。
当形成虚拟场景100的视觉感知时,终端400通过图形计算硬件计算显示所需要的数据,并完成显示数据的加载、解析和渲染,在图形输出硬件输出能够对虚拟场景形成视觉感知的视频帧,例如,在智能手机的显示屏幕呈现二维的视频帧,或者,在增强现实/虚拟现实眼镜的镜片上投射实现三维显示效果的视频帧;此外,为了丰富感知效果,终端400还可以借助不同的硬件来形成听觉感知、触觉感知、运 动感知和味觉感知的一种或多种。
作为示例,终端400上运行有客户端410(例如单机版的游戏应用),在客户端410的运行过程中输出包括有角色扮演的虚拟场景,虚拟场景可以是供游戏角色交互的环境,例如可以是用于供游戏角色进行对战的平原、街道、山谷等等;以第一人称视角显示虚拟场景100为例,在虚拟场景100中显示虚拟对象401,虚拟对象401可以是受用户(或称玩家)控制的游戏角色,将响应于真实用户针对按钮(包括摇杆按钮、攻击按钮、防御按钮等)的操作而在虚拟场景中操作,例如当真实用户向左移动摇杆按钮时,虚拟对象将在虚拟场景中向左部移动,还可以保持原地静止、跳跃以及使用各种功能(如技能和道具);虚拟对象401也可以是通过训练设置在虚拟场景对战中的人工智能(AI,Artificial Intelligence);虚拟对象401还可以是设置在虚拟场景互动中的非用户角色(NPC,Non-Player Character);虚拟对象401还可以是虚拟场景100中不可活动对象或者可活动对象。
举例来说,终端400为游戏开发人员所使用的终端,在游戏的开发打包阶段,终端400获取虚拟对象401的第一纹理数据、以及对应虚拟对象的第二纹理数据的转换参数,其中,第一纹理数据的数据量小于第二纹理数据的数据量,且第一纹理数据的图像信息范围小于第二纹理数据的图像信息范围;基于转换参数以及第一纹理数据,进行拟合渲染处理,得到包括虚拟对象的拟合渲染图;确定拟合渲染图与标准渲染图之间的渲染损失,并基于渲染损失更新转换参数以及第一纹理数据,其中,标准渲染图是基于第二纹理数据进行标准渲染处理得到的包括虚拟对象的渲染图,至此均是游戏开始前所执行的处理,在测试阶段,即游戏安装运行过程中,终端400基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理,得到包括所述虚拟对象的目标渲染图,并基于目标渲染图进行虚拟场景的人机交互,例如游戏对抗。
在另一个实施场景中,参见图1B,图1B是本申请实施例提供的图像渲染系统的架构示意图,应用于终端400和服务器200,适用于依赖于服务器200的计算能力完成虚拟场景计算、并在终端400输出虚拟场景的应用模式。
以形成虚拟场景100的视觉感知为例,服务器200进行虚拟场景相关显示数据(例如场景数据)的计算并通过网络300发送到终端400,终端400依赖于图形计算硬件完成计算显示数据的加载、解析和渲染,依赖于图形输出硬件输出虚拟场景以形成视觉感知,例如可以在智能手机的显示屏幕呈现二维的视频帧,或者,在增强现实/虚拟现实眼镜的镜片上投射实现三维显示效果的视频帧;对于虚拟场景的形式的感知而言,可以理解,可以借助于终端400的相应硬件输出,例如使用麦克风形成听觉感知,使用振动器形成触觉感知等等。
作为示例,终端400上运行有客户端410(例如网络版的游戏应用),通过连接服务器200(例如游戏服务器)与其他用户进行游戏互动,终端400输出客户端410的虚拟场景100,以第一人称视角显示虚拟场景100为例,在虚拟场景100中显示虚拟对象401,虚拟对象401可以是受用户(或称玩家)控制的游戏角色,将响应于真实用户针对按钮(包括摇杆按钮、攻击按钮、防御按钮等)的操作而在虚拟场景中操作,例如当真实用户向左移动摇杆按钮时,虚拟对象将在虚拟场景中向左部移动,还可以保持原地静止、跳跃以及使用各种功能(如技能和道具);虚拟对象401也可以是通过训练设置在虚拟场景对战中的人工智能(AI,Artificial Intelligence);虚拟对象401还可以是设置在虚拟场景互动中的非用户角色(NPC,Non-Player Character);虚拟对象401还可以是虚拟场景100中不可活动对象或者可活动对象。
举例来说,服务器200获取虚拟对象401的第一纹理数据、以及对应虚拟对象的第二纹理数据的转换参数,其中,第一纹理数据的数据量小于第二纹理数据的数据量,且第一纹理数据的图像信息范围小于第二纹理数据的图像信息范围;基于转换参数以及第一纹理数据,进行拟合渲染处理,得到包括虚拟对象的拟合渲染图;确定拟合渲染图与标准渲染图之间的渲染损失,并基于渲染损失更新转换参数以及第一纹理数据,其中,标准渲染图是基于第二纹理数据进行标准渲染处理得到的包括虚拟对象的渲染图,至此均是游戏开始前所执行的处理,例如,游戏的开发打包阶段,终端400接收由服务器200发送的更新的第一纹理数据以及更新的转换参数(可以封装在安装包中),在游戏安装运行过程中,终端400基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理,得到包括所述虚拟对象的目标渲染图,并基于目标渲染图进行虚拟场景的人机交互,例如游戏对抗。
在一些实施例中,终端400可以通过运行计算机程序来实现本申请实施例提供的图像渲染方法,例如,计算机程序可以是操作系统中的原生程序或软件模块;可以是本地(Native)应用程序(APP,APPlication),即需要在操作系统中安装才能运行的程序,例如换装游戏APP(即上述的客户端410);也可以是小程序,即只需要下载到浏览器环境中就可以运行的程序;还可以是能够嵌入至任意APP中的游戏小程序。总而言之,上述计算机程序可以是任意形式的应用程序、模块或插件。
以计算机程序为应用程序为例,在实际实施时,终端400安装和运行有支持虚拟场景的应用程序。该应用程序可以是第一人称射击游戏(FPS,First-Person Shooting game)、第三人称射击游戏、虚拟现实应用程序、三维地图程序或者多人枪战类生存游戏中的任意一种。用户使用终端400操作位于虚拟场景 中的虚拟对象进行活动,该活动包括但不限于:调整身体姿态、爬行、步行、奔跑、骑行、跳跃、驾驶、拾取、射击、攻击、投掷、建造虚拟建筑中的至少一种。示意性的,该虚拟对象可以是虚拟人物,比如仿真人物角色或动漫人物角色等。
在一些实施例中,本申请实施例还可以借助于云技术(Cloud Technology)实现,云技术是指在广域网或局域网内将硬件、软件、网络等系列资源统一起来,实现数据的计算、储存、处理和共享的一种托管技术。
云技术是基于云计算商业模式应用的网络技术、信息技术、整合技术、管理平台技术、以及应用技术等的总称,可以组成资源池,按需所用,灵活便利。云计算技术将变成重要支撑。技术网络系统的后台服务需要大量的计算、存储资源。
示例的,图1B中的服务器200可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN、以及大数据和人工智能平台等基础云计算服务的云服务器。终端400可以是智能手机、平板电脑、笔记本电脑、台式计算机、智能音箱、智能手表等,但并不局限于此。终端以及服务器可以通过有线或无线通信方式进行直接或间接地连接,本发明实施例中不做限制。
参见图2,图2是本申请实施例提供的用于图像渲染的电子设备的结构示意图,图2所示的终端400包括:至少一个处理器410、存储器450、至少一个网络接口420和用户接口430。终端400中的各个组件通过总线系统440耦合在一起。可理解,总线系统440用于实现这些组件之间的连接通信。总线系统440除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图2中将各种总线都标为总线系统440。
处理器410可以是一种集成电路芯片,具有信号的处理能力,例如通用处理器、数字信号处理器(DSP,Digital Signal Processor),或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等,其中,通用处理器可以是微处理器或者任何常规的处理器等。
用户接口430包括使得能够呈现媒体内容的一个或多个输出装置431,包括一个或多个扬声器和/或一个或多个视觉显示屏。用户接口430还包括一个或多个输入装置432,包括有助于用户输入的用户接口部件,比如键盘、鼠标、麦克风、触屏显示屏、摄像头、其他输入按钮和控件。
存储器450可以是可移除的,不可移除的或其组合。示例性的硬件设备包括固态存储器,硬盘驱动器,光盘驱动器等。存储器450可选地包括在物理位置上远离处理器410的一个或多个存储设备。
存储器450包括易失性存储器或非易失性存储器,也可包括易失性和非易失性存储器两者。非易失性存储器可以是只读存储器(ROM,Read Only Memory),易失性存储器可以是随机存取存储器(RAM,Random Access Memory)。本申请实施例描述的存储器450旨在包括任意适合类型的存储器。
在一些实施例中,存储器450能够存储数据以支持各种操作,这些数据的示例包括程序、模块和数据结构或者其子集或超集,下面示例性说明。
操作系统451,包括用于处理各种基本系统服务和执行硬件相关任务的系统程序,例如框架层、核心库层、驱动层等,用于实现各种基础业务以及处理基于硬件的任务;
网络通信模块452,用于经由一个或多个(有线或无线)网络接口420到达其他电子设备,示例性的网络接口420包括:蓝牙、无线相容性认证(WiFi)、和通用串行总线(USB,Universal Serial Bus)等;
呈现模块453,用于经由一个或多个与用户接口430相关联的输出装置431(例如,显示屏、扬声器等)使得能够呈现信息(例如,用于操作外围设备和显示内容和信息的用户接口);
输入处理模块454,用于对一个或多个来自一个或多个输入装置432之一的一个或多个用户输入或互动进行检测以及翻译所检测的输入或互动。
在一些实施例中,本申请实施例提供的图像渲染装置可以采用软件方式实现,图2示出了存储在存储器450中的图像渲染装置455,其可以是程序和插件等形式的软件,包括以下软件模块:获取模块4551、拟合模块4552、损失模块4553以及渲染模块4554,这些模块是逻辑上的,因此根据所实现的功能可以进行任意的组合或进一步拆分。将在下文中说明各个模块的功能。
下面将结合附图对本申请实施例提供的图像渲染方法进行具体说明。本申请实施例提供的图像渲染方法可以由图1A中的终端400单独执行,也可以由图1B中的终端400和服务器200协同执行。
下面,以由图1A中的终端400单独执行本申请实施例提供的图像渲染方法为例进行说明。参见图3A,图3A是本申请实施例提供的图像渲染方法的流程示意图,将结合图3A示出的步骤101-步骤104进行说明。
需要说明的是,图3A示出的方法可以由终端400上运行的各种形式的计算机程序执行,并不局限于上述的客户端410,还可以是上文的操作系统451、软件模块和脚本,因此客户端不应视为对本申请实施例的限定。
在步骤101中,获取虚拟对象的第一纹理数据、以及对应虚拟对象的第二纹理数据的转换参数。
作为示例,转换参数指利用第二纹理数据进行渲染的过程中涉及到的参数,在渲染之前虚拟对象处于待渲染的状态,处于待渲染的状态的虚拟对象为基础对象模型,其中,基础对象模型包括虚拟对象的躯干,不包括用于装饰虚拟对象的展示信息,例如展示信息包括用于修饰虚拟对象面部的妆容(例如唇形、眼影、瞳孔、虹膜、腮红等)、用于修饰虚拟对象的四肢的服饰(例如古装、战斗服等)等。
作为示例,第一纹理数据可以是LDRT纹理,第二纹理数据可以是HDRT纹理,LDRT纹理包括每个纹理像素的第一颜色值,HDRT纹理包括每个纹理像素的第二颜色值,第一纹理数据的数据量小于第二纹理数据的数据量,在HDRT中,每个通道需要32个比特位,从而每个像素(包括3个通道)需要96个比特位,然而在LDRT中,每个通道需要8个比特位,从而每个像素(包括3个通道)需要24个比特位。第一纹理数据的图像信息范围小于第二纹理数据的图像信息范围,图像信息范围包括动态范围,动态范围是最高亮度与最低亮度的比值,HDRT的动态范围高于LDRT的动态范围。
作为示例,第一纹理数据可以是初始化的纹理图像中每个像素的像素参数,包括材质参数以及纹理参数,第一纹理数据还可以是通过迭代过程中第n-1次迭代得到的,从而将第n-1次迭代得到的第一纹理数据作为第n次迭代的输入,n为大于1的整数,每次迭代均需要执行后续步骤102以及步骤103。
需要说明的是,当第一纹理数据是初始化的纹理图像中每个像素的像素参数时,在步骤101之前,还需要初始化渲染场景,包括加载模型(基础对象模型)、纹理、材质等等,然后设置摄像机的位置、朝向,设置模型的位置、朝向,设置各种灯光的参数。
在步骤102中,基于转换参数以及第一纹理数据,进行拟合渲染处理,得到包括虚拟对象的拟合渲染图。
在一些实施例中,拟合渲染处理是对虚拟对象进行的,基于转换参数以及第一纹理数据对虚拟对象进行拟合渲染处理,得到包括虚拟对象的拟合渲染图。
需要说明的是,在实际的游戏运行中,虚拟对象不会一直处于同一个朝向,所以需要拟合虚拟对象各个朝向的渲染效果,这样才能保证虚拟对象各个朝向的拟合效果。以获取的第一纹理数据作为资源,用不受光材质(Unlit)的渲染方式对第一纹理数据进行拟合渲染处理,得到包括虚拟对象的拟合渲染图。
在一些实施例中,参见图3B,图3B是本申请实施例提供的图像渲染方法的流程示意图,步骤102中基于转换参数以及第一纹理数据,进行拟合渲染处理,得到包括虚拟对象的拟合渲染图,可以通过图3B中的步骤1021至步骤1022实现。
在步骤1021中,基于转换参数,对第一纹理数据进行朝向第二纹理数据的空间转换处理,得到与第二纹理数据处于相同动态范围空间的第三纹理数据。
在一些实施例中,第一纹理数据包括每个纹理像素的第一颜色值,第二纹理数据包括每个纹理像素的第二颜色值,转换参数包括第一一阶参数以及第二一阶参数;步骤1021中基于转换参数,对虚拟对象的第一纹理数据进行朝向第二纹理数据的空间转换处理,得到与第二纹理数据处于相同动态范围空间的第三纹理数据,可以通过以下技术方案实现:针对第一纹理数据中的每个纹理像素执行以下处理:确定纹理像素的第一颜色值与第一一阶参数之间的第一乘积结果;确定第一乘积结果与第二一阶参数的第一求和结果;将纹理像素的第一颜色值与第一求和结果之间的比值,确定为纹理像素的与第二颜色值处于相同动态范围空间的第三颜色值;将多个纹理像素的第三颜色值,组成第三纹理数据。通过空间转换处理,能够将第一纹理数据映射至对应第二纹理数据的动态范围空间,从而可以具有更宽的亮度范围,因此可以实现更为优秀的渲染效果。
作为示例,空间转换处理是通过一阶高动态范围空间转换函数实现的,一阶高动态范围空间转换函数参见公式(1):
其中,p,q均是一阶色调映射函数的需要拟合的参数,p为第一一阶参数,q为第二一阶参数,不同的p,q对应不同的曲线,c
H是第三颜色值,c
L是第一颜色值,图6D中示出了不同p,q对应的不同曲线,曲线的横坐标是第一颜色值,曲线的纵坐标是第三颜色值。
在一些实施例中,第一纹理数据包括每个纹理像素的第一颜色值,第二纹理数据包括每个纹理像素的第二颜色值,转换参数包括第一二阶参数、第二二阶参数、第三二阶参数、第四二阶参数以及第五二阶参数;步骤1021中基于转换参数,对虚拟对象的第一纹理数据进行朝向第二纹理数据的空间转换处理,得到第三纹理数据,可以通过以下技术方案实现:针对第一纹理数据中的每个纹理像素执行以下处理:确定纹理像素的第一颜色值的平方与第一二阶参数之间的第二乘积结果、以及纹理像素的第一颜色值与第二二阶参数之间的第三乘积结果;对第二乘积结果、第三乘积结果以及第三二阶参数进行求和处理,得到第二求和结果;确定纹理像素的第一颜色值与第四二阶参数之间的第四乘积结果;对第二求和结果的平方根、第四乘积结果以及第三二阶参数的平方根进行求和处理,得到第三求和结果;确定纹理像素的第一颜色值与第五二阶参数的第四求和结果;将第三求和结果与第四求和结果之间的比值,确定为纹 理像素的与第二颜色值处于相同动态范围空间的第三颜色值;将多个纹理像素的第三颜色值,组成第三纹理数据。通过空间转换处理,能够将第一纹理数据映射至对应第二纹理数据的动态范围空间,从而可以具有更宽的亮度范围,因此可以实现更为优秀的渲染效果。
作为示例,空间转换处理是通过二阶高动态范围空间转换函数实现的,一阶高动态范围空间转换函数参见公式(2):
其中,p,q,r,s,t均是二阶高动态范围空间转换函数的需要拟合的参数,p为第一二阶参数,q为第二二阶参数,r是第三二阶参数,s是第四二阶参数,t是第五二阶参数,不同的p,q,r,s,t的组合对应不同的曲线,c
H是第三颜色值,c
L是第一颜色值,图6E中示出了不同p,q,r,s,t的组合对应的不同曲线,曲线的横坐标是第一颜色值,曲线的纵坐标是第三颜色值。
在步骤1022中,基于第三纹理数据,进行拟合渲染处理,得到包括虚拟对象的拟合渲染图。
在一些实施例中,第三纹理数据包括每个纹理像素的第三颜色值;步骤1022中基于第三纹理数据,进行拟合渲染处理,得到包括虚拟对象的拟合渲染图,可以通过以下技术方案实现:获取虚拟对象的二维纹理坐标;获取对应拟合渲染处理的可微分渲染框架;将二维纹理坐标以及每个纹理像素的第三颜色值在可微分渲染框架中进行正向传播,得到包括虚拟对象的拟合渲染图。可微分渲染框架是将基于硬件的渲染过程进行软件化封装得到的,其中基于硬件的渲染过程可以是无受光材质的渲染方式,由于进行软件化封装,从而可微可导,因而后续可以基于梯度进行反向传播。
作为示例,获取虚拟对象的二维纹理坐标,二维纹理坐标是预先给定的或者自动化生成的,二维纹理坐标可以是UV2坐标,所有的图象文件都是二维平面。水平方向是U,垂直方向是V,通过二维平面可以定位图象上任意一个象素,它定义了图像上每个点的位置信息,这些点与模型是相互联系的,以决定表面纹理贴图的位置,UV坐标就是将图像上每个点精确对应到模型物体的表面,获取对应拟合渲染处理的可微分渲染框架,将二维纹理坐标以及每个纹理像素的第三颜色值在可微分渲染框架中进行正向传播,得到包括虚拟对象的拟合渲染图,参与正向传播的数据还包括材质、光照等渲染资源,参见图8,图8是本申请实施例提供的光栅化渲染流程示意图,改造后的整个光栅化渲染流程通过统一电子设备架构(CUDA,Compute Unified Device Architecture)来实现,并且作为机器学习库(例如PyTorch)的一个模块来运行,这里的CUDA即为上述可微分渲染框架,整个渲染过程是可以逆向传播梯度的,所以整个过程是可微的。将所有相关资源(Assets)通过可传递梯度的计算连接(Connection with Gradients),获取模型切线/法线数据(Compute Tangent/Normal)、生成光照图纹理坐标(Lightmap UV Generation)、计算动画和蒙皮(Animation&Skinning),将模型切线/法线数据、光照图纹理坐标、动画和蒙皮通过可传递梯度的计算连接(Connection with Gradients),获取裁剪空间中的坐标(Clip Space Pos),将所有相关资源(Assets)通过常规的计算连接(Ordinary Connection),获取三角形索引数据(Index Buffer),基于裁剪空间中的坐标以及三角形索引数据进行光栅化(Rasterization),分别得到坐标(u,v)以及
基于坐标(u,v)、
以及顶点属性数据(Vertex Attributes)进行插值(Interpolation)处理,得到插值后的属性数据(Interpolate Attributes)以及像素级属性数据的导数(Attribute Pixel Derivatives),基于差值后的属性数据、像素级属性数据的导数以及纹理(Texture)进行纹理采样(Texture Sampling)处理,得到过滤后的采样数据(Filtered Samples),基于光照(Lights Camera)、材质参数(Material Paremeters)、插值后的属性数据以及过滤后的采样数据进行待拟合的目标渲染(GroundTruth Shading),得到有锯齿的渲染图像(Aliased Image),对有锯齿的渲染图像进行抗锯齿(Antialiasing)处理,得到最终拟合渲染图(Final Image)。
在步骤103中,确定拟合渲染图与标准渲染图之间的渲染损失,并基于渲染损失更新转换参数以及第一纹理数据。
作为示例,标准渲染图中虚拟对象的朝向与迭代过程中拟合渲染时设置的朝向相同,标准渲染图是基于第二纹理数据进行标准渲染处理得到的包括虚拟对象的渲染图。标准渲染处理可以是基于物理的渲染处理,具体而言,基于虚拟对象的第二纹理数据,进行对应虚拟对象的基于物理的渲染处理,得到标准渲染图。例如,通过基于物理规律的渲染技术(PBR,Physically Based Rendering)加载虚拟对象的第二纹理数据,并基于物理规律对第二纹理数据进行渲染处理,得到符合物理规律的标准渲染图,这种渲染方式可以带来最优的渲染效果,但是代价是需要付出较大的存储资源以及计算资源。
在一些实施例中,参见图3C,图3C是本申请实施例提供的图像渲染方法的流程示意图,步骤103中确定拟合渲染图与标准渲染图之间的渲染损失,可以通过图3C中的步骤1031至步骤1032实现。
在步骤1031中,确定标准渲染图与拟合渲染图在屏幕空间的整体像素值差异。
在一些实施例中,步骤1031中确定标准渲染图与拟合渲染图在屏幕空间的整体像素值差异,可以通 过以下技术方案实现:针对拟合渲染图以及标准渲染图在屏幕空间中任一相同像素执行以下处理:确定拟合渲染图中对应像素的第一像素值,并确定标准渲染图中的对应像素的第二像素值;将第一像素值与第二像素值之间的差值的绝对值作为对应像素的像素值差异;对与多个像素对应的像素值差异进行求和处理,得到整体像素值差异。以每个像素作为差异衡量的最小单位,可以有效提高渲染损失的价值,从而基于渲染损失进行更新时,可以得到具有更优渲染效果的转换参数以及第一纹理数据。
在一些实施例中,针对拟合渲染图以及标准渲染图在屏幕空间中每个相同像素执行以下处理:确定拟合渲染图中对应像素的第一像素值,并确定标准渲染图中的对应像素的第二像素值;将第一像素值与第二像素值之间的差值的绝对值作为对应像素的像素值差异;对与多个像素对应的像素值差异进行求和处理,得到整体像素值差异。
在步骤1032中,基于整体像素值差异、拟合渲染图的长度以及拟合渲染图的宽度,确定渲染损失。
作为示例,渲染损失是基于公式(3)得到的:
其中,Img1和Img2分别表示标准渲染图与拟合渲染图,H和W分别表示Img1(或Img2)的长宽,
表示标准渲染图与拟合渲染图在屏幕空间的像素值差异,(i,j)表示标准渲染图中在屏幕空间中任一像素。
需要说明的是,本申请实施例的渲染损失并不局限于公式(3),还可以是其他的变形公式。
在一些实施例中,步骤103中于渲染损失更新转换参数以及第一纹理数据,可以通过以下技术方案实现:基于渲染损失对第一纹理数据进行偏导处理,得到对应第一纹理数据的梯度;基于渲染损失对转换参数进行偏导处理,得到对应转换参数的梯度;基于对应第一纹理数据的梯度,更新第一纹理数据,并基于对应转换参数的梯度,更新转换参数。
在一些实施例中,上述基于第一纹理数据的梯度,更新第一纹理数据,可以通过以下技术方案实现:将设定学习率与对应第一纹理数据的梯度相乘,得到第一纹理数据的数据变化值;将第一纹理数据的数据变化值与第一纹理数据相加,得到更新的第一纹理数据;上述基于对应转换参数的梯度,更新转换参数,可以通过以下技术方案实现:将设定学习率与对应转换参数的梯度相乘,得到转换参数的数据变化值;将转换参数的数据变化值与转换参数相加,得到更新的转换参数。
需要说明的是,基于渲染损失更新第一纹理数据的过程类似于机器学习的反向传播过程,将第一纹理数据输入到可微分光栅化渲染器,通过可微分光栅化渲染器的正向渲染过程,输出拟合渲染图,由于可微分光栅化渲染器的输出结果(拟合渲染图)与标准渲染图有误差(渲染损失),则计算输出结果与标准渲染图之间的误差,并基于误差进行反向传播,反向传播是基于梯度下降算法实现的,在反向传播的过程中,根据误差调整第一纹理数据以及转换参数,例如,求出渲染损失对第一纹理数据的偏导数,生成渲染损失对第一纹理数据的梯度,由于梯度的方向表明误差扩大的方向,采用梯度下降算法更新第一纹理数据,求出渲染损失对转换参数的偏导数,生成渲染损失对转换参数的梯度,由于梯度的方向表明误差扩大的方向,采用梯度下降算法更新转换参数,不断迭代上述过程,直至满足迭代结束条件。
在步骤104中,基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理,得到包括虚拟对象的目标渲染图。
在一些实施例中,步骤104中基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理,可以通过以下技术方案实现:当渲染损失小于损失阈值时,基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理;或者当更新次数达到更新次数阈值时,基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理。
作为示例,迭代结束条件包括以下至少之一:渲染损失函数的值小于损失阈值;迭代次数达到设定次数。当满足迭代结束条件时,将更新的转换参数以及更新的第一纹理数据作为最终保存的用于进行实施渲染处理的渲染基础资源,当不限定迭代次数时,通过设定损失阈值来界定目标渲染图与标准渲染图之间的相似程度,当损失阈值越小时,目标渲染图与标准渲染图之间的越相似;当不限定损失阈值时,通过设定迭代次数来界定目标渲染图与标准渲染图之间的相似程度,当设定的迭代次数越大时,目标渲染图与标准渲染图之间的越相似。通过损失阈值或者更新次数来判断是否能够基于当前更新的第一纹理数据以及转换参数进行实时渲染,可以提高拟合效率,防止浪费拟合资源。
在一些实施例中,步骤104中基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理,得到包括虚拟对象的目标渲染图,可以通过以下技术方案实现:基于更新的转换参数,对更新的第一纹理数据进行朝向第二纹理数据的空间转换处理,得到与第二纹理数据处于相同动态范围空间的第四纹理数据;确定虚拟对象的至少一个二维纹理坐标;针对每个二维纹理坐标执行以下处理:从第四纹理数据中采样出与二维纹理坐标对应的纹理图像,并对采样得到的纹理图像进行贴合处理;基于每个二维纹理坐标的贴合结果,生成包括虚拟对象的目标渲染图。
作为示例,保存更新的第一纹理数据到纹理图像中,获得纹理图像后,在游戏过程中就可以利用该纹理图像进行实时图像渲染,避免加载多种纹理,节约了相关的存储空间以及计算资源,进而提高虚拟对象的渲染效率。
作为示例,获取虚拟对象的二维纹理坐标,二维纹理坐标是预先给定的或者自动化生成的,二维纹理坐标可以是UV2坐标,UV2坐标定义了图像上每个点的位置信息,这些点与模型是相互联系的,以决定表面纹理贴图的位置,UV坐标就是将图像上每个点精确对应到模型物体的表面,通过UV2来采样第四纹理数据,将采样得到的第四纹理数据构成的纹理贴图贴在虚拟对象的基础模型上,从而实现实时渲染过程。
通过本申请实施例对第一纹理数据以及第二纹理数据分别进行渲染处理,并基于渲染结果之间的损失更新第一纹理数据以及基于第一纹理数据进行渲染时所涉及到的转换参数,由于第二纹理数据的图像范围信息优于第一纹理数据,且第一纹理数据的数据量由于第二纹理数据,从而基于更新得到的第一纹理数据以及转换参数进行实时图像渲染时,可以达到对标第二纹理数据的渲染效果,仅消耗较少的存储空间以及计算资源,进而有效提高渲染资源利用率。
下面,将说明本申请实施例在一个实际的应用场景中的示例性应用。
本申请实施例可以应用于各种游戏的渲染场景,例如对抗游戏、赛车游戏、变装游戏等。在一些实施例中,服务器获取虚拟对象的第一纹理数据、以及对应虚拟对象的第二纹理数据的转换参数,其中,第一纹理数据的数据量小于第二纹理数据的数据量,且第一纹理数据的图像信息范围小于第二纹理数据的图像信息范围;基于转换参数以及第一纹理数据,进行拟合渲染处理,得到包括虚拟对象的拟合渲染图;确定拟合渲染图与标准渲染图之间的渲染损失,并基于渲染损失更新转换参数以及第一纹理数据,标准渲染图是基于第二纹理数据进行标准渲染处理得到的包括虚拟对象的渲染图,至此均是游戏开始前所执行的处理,例如,游戏的开发打包阶段,终端接收由服务器发送的更新的第一纹理数据以及更新的转换参数(可以封装在安装包中),在游戏安装运行过程中,终端基于更新的转换参数以及更新的第一纹理数据,对所述虚拟对象进行实时渲染处理,得到包括所述虚拟对象的目标渲染图,并基于目标渲染图进行虚拟场景的人机交互,例如游戏对抗。需要说明的是,本申请实施例适用于大部分手游项目,尤其能满足对渲染性能和渲染效果都要求较高的手游项目,本方案能大幅提高渲染效率,并做到和原始效果差异很小。
参见图4,图4是本申请实施例提供的渲染示意图,本申请实施例的游戏中的渲染效果只需要一张纹理和对应的UV2顶点数据即可,可以看作是将拟合出来的纹理401采用第2个纹理坐标(UV2)贴在模型402(即待渲染的虚拟对象)上。
下面具体说明本申请实施例提供的基于可微分渲染的计算框架和机器学习拟合算法。参见图5,图5是本申请实施例提供的拟合渲染的流程示意图,如图5所示,渲染过程是一个非常复杂的函数f,它的输入是x,x是参数集(待拟合参数),包含了诸如模型顶点位置、材质参数、纹理参数等数据,f的输出y就是渲染结果。
本申请实施例不会去优化顶点位置等数据,x中只有第一纹理数据以及转换参数需要进行优化,目标是在给定HDRR渲染结果的情况下,计算x来使得f(x)尽量逼近HDRR渲染结果,逼近的衡量程度用损失函数(Loss)来计算。通过以下公式
计算拟合渲染图对待拟合参数的偏微分,通过以下公式
计算损失函数对拟合渲染图的偏微分,通过以下公式
计算损失函数对待拟合参数的偏微分。
优化算法采用梯度下降算法。不过f是一个非常复杂的函数,正常的渲染流程是不可导的,所以为了用梯度下降算法来求最优的x需要f可导,本申请实施例提供一个可求导(即可微分)的光栅化渲染框架。需要说明的是,f是完整的光栅化渲染流程,为了让其可微,本申请实施例对光栅化渲染进行了改进,参见图8,改造后的整个光栅化渲染流程通过统一电子设备架构(CUDA,Compute Unified Device Architecture)来实现,并且作为机器学习库(例如PyTorch)的一个模块来运行,整个渲染过程是可以逆向传播梯度的,所以整个过程是可微的。将所有相关资源(Assets)通过可传递梯度的计算连接(Connection with Gradients),获取模型切线/法线数据(Compute Tangent/Normal)、生成光照图纹理坐标(Lightmap UV Generation)、计算动画和蒙皮(Animation&Skinning),将模型切线/法线数据、光照图纹理坐标、动画和蒙皮通过可传递梯度的计算连接(Connection with Gradients),获取裁剪空间中的坐标(Clip Space Pos),将所有相关资源(Assets)通过常规的计算连接(Ordinary Connection),获取三角形索引数据(Index Buffer),基于裁剪空间中的坐标以及三角形索引数据进行光栅化(Rasterization),分别得到坐标(u,v)以及
基于坐标(u,v)、
以及顶点属性数据(Vertex Attributes)进行插值(Interpolation)处理,得到插值后的属性数据(Interpolate Attributes)以及像素级属性数据的导数(Attribute Pixel Derivatives), 基于差值后的属性数据、像素级属性数据的导数以及纹理(Texture)进行纹理采样(Texture Sampling)处理,得到过滤后的采样数据(Filtered Samples),基于光照(Lights Camera)、材质参数(Material Paremeters)、插值后的属性数据以及过滤后的采样数据进行待拟合的目标渲染(GroundTruth Shading),得到有锯齿的渲染图像(Aliased Image),对有锯齿的渲染图像进行抗锯齿(Antialiasing)处理,得到最终的渲染图像(Final Image)。
下面具体说明本申请实施例提供的图像渲染方法中高动态范围空间转换函数的应用原理。为了减少包体占用以及内存占用,最终存放的纹理数据是LDRT,存放范围是[0-1]之间的数据,在渲染的时候会将其转换到高动态范围空间,然后基于转换得到的数据进行渲染,高动态范围空间的数据范围更广,包含了更多的人类视觉系统能感知的数据,这个转换的过程所使用的函数是高动态范围空间转换函数,可以看作是色调映射的逆过程。
作为示例,色调映射函数Reinhard可以参见公式(4),它的函数曲线如图6A中的虚线曲线所示:
其中,c
H是高动态范围空间的颜色值,c
L是低动态范围空间的颜色值,它相应的高动态范围空间转换函数(Inverse Reinhard)可以参见公式(5),它的函数曲线如图6B中的虚线曲线所示:
作为示例,色调映射函数Aces还可以参见公式(6),它的函数曲线如图6A中的实线曲线所示
其中,c
H是原始的HDR颜色值,c
L是LDR颜色值,它相应的高动态范围空间转换函数(Inverse Aces)可以参见公式(7),它的函数曲线如图6B中的实线曲线所示:
本申请实施例中针对不同HDRR的不同渲染效果均需要拟合出LDRT,使得基于LDRT进行不受光材质渲染之后逼近HDRR的渲染效果。对于不同的渲染效果来说,它们的亮度范围其实是不一样的,单一的高动态范围空间转换函数的参数均不是最优解,不论是对应Reinhard的高动态范围空间转换函数还是对应Aces的高动态范围空间转换函数,它们的参数都是固定的,并不能很好的适配各种不同的渲染亮度范围。
参见图7A-图7B,图7A-图7B是本申请实施例提供的图像渲染方法的渲染示意图,图7A的虚拟对象的颜色亮度范围更多的集中于暗部,图7B的虚拟对象的颜色亮度范围更多的集中于亮部,所以理想情况下它们的高动态范围空间转换函数的曲线应该是图6C所示,图7A中虚拟对象应该使用图6C示出的虚线曲线,即分配较多的纹理数据范围给较暗的颜色空间,虚线曲线分配纹理数据范围[0-0.9]给亮度区间[0-0.2],并分配较少的纹理数据范围给较亮的颜色空间,虚线曲线分配了纹理数据范围[0.9-1.0]给亮度区间[0.2-1.6],并且控制最大亮度为1.6左右。这样的数据分配可以让渲染结果较暗的区域实现较高的精度范围以及最佳拟合效果。图7B中虚拟对象应该使用图6C中的实线曲线,即分配较多的纹理数据范围给较亮的颜色空间,实线曲线分配了纹理数据范围[0.44-1.0]给亮度区间[0.2-4.34],分配较少的纹理数据范围给较暗的颜色空间,实线曲线分配了纹理数据范围[0-0.44]给亮度区间[0-0.2],并且控制最大亮度为4.34左右。这样的数据分配可以让渲染结果较亮的区域实现较高的精度范围以及最佳拟合效果。
通过上述可知若想实现最佳拟合效果,高动态范围空间转换函数的曲线都需要定制,而且需要自动拟合得到,申请人在实施本申请实施例时从色调映射函数Reinhard和色调映射函数Aces的公式中得到启发,实现了如下两种形式的高动态范围空间转换函数的参数自动拟合,并且本申请实施例提供的图像渲染方法不仅仅局限于下面形式的高动态范围空间转换函数,其他游戏可以根据需求的不同和游戏计算量开销的不同进行个性化配置。
作为示例,一阶色调映射函数如公式(8)所示:
其中,a,b,c,d均是一阶色调映射函数的参数,c
H是原始的HDR颜色值,c
L是LDR颜色值。
作为示例,一阶色调映射函数对应的一阶高动态范围空间转换函数如公式(9)所示:
其中,e,f,g,h均是一阶高动态范围空间转换函数的参数,c
H是原始的HDR颜色值,c
L是LDR颜色值。
通过后续的机器学习拟合算法,对于HDRR的每个渲染效果都会拟合出来最优的e,f,g,h参数,并且,由于c
L为0的时候c
H也为0,所以f参数恒为0,再进行化简可以得到公式(10):
其中,p,q均是一阶高动态范围空间转换函数的需要拟合的参数,不同的p,q对应不同的曲线,c
H是原始的HDR颜色值,c
L是LDR颜色值,图6D中示出了不同p,q对应的不同曲线,横坐标是原始的HDR颜色值,纵坐标是LDR颜色值。
作为示例,二阶色调映射函数如公式(11)所示:
其中,a,b,c,d,e,f均是二阶色调映射函数的参数,c
H是原始的HDR颜色值,c
L是LDR颜色值。
作为示例,二阶色调映射函数对应的二阶高动态范围空间转换函数如公式(12)所示:
其中,p,q,r,s,t,u,v均是二阶高动态范围空间转换函数的参数,c
H是原始的HDR颜色值,c
L是LDR颜色值。
通过后续的机器学习拟合算法,对于HDRR的每个渲染效果都会拟合出来最优的p,q,r,s,t,u,v参数。另外因为c
L为0的时候c
H也为0,所以u,v参数恒为0,再进行化简可以得到公式(13):
其中,p,q,r,s,t均是二阶高动态范围空间转换函数的需要拟合的参数,不同的p,q,r,s,t对应不同的曲线,c
H是原始的HDR颜色值,c
L是LDR颜色值,图6E中示出了不同p,q对应的不同曲线,横坐标是原始的HDR颜色值,纵坐标是LDR颜色值。
二阶高动态范围空间转换函数能给出比一阶高动态范围空间转换函数更多的曲线形状,支持更多的亮度范围的压缩方式,一阶高动态范围空间转换函数所对应的计算量小于二阶高动态范围空间转换函数。
接下来会详细介绍如何使用可微渲染框架和机器学习算法来拟合出最优的转换参数以及第一纹理数据(LDRT)。
如图9所示,输入部分包括:1)PBR渲染的原始虚拟对象,包括模型901、材质球、光照902(可以包括平行光、点光、面光源等);2)摄像机903,在整个拟合过程中,摄像机保持不动。
输出部分包括:1)自动化生成的模型UV2,用来映射纹理空间到模型渲染结果;2)一张基于UV2展开的纹理1001,如图10所示,当拟合结束时这张纹理里面存放的就是烘焙出来的LDRT数据,其中,这张纹理的像素参数在下面描述中用φ来表示;3)拟合得到的最优的高动态范围空间转换函数(Inverse Tone Mapping)的参数。
如图11所示的完整的拟合过程,在给定的输入输出条件下,完整的拟合过程主要包含8个步骤,下面将具体说明每个步骤:
在步骤801中,初始化场景及各项参数。
在进行机器学习拟合之前,先初始化场景,包括加载模型、纹理、材质等等,然后设置相机的位置、朝向(不优化),设置模型的位置、朝向,设置各种灯光的参数(不优化),自动化生成的模型UV2。另外需要初始化纹理图每个像素的数据(第一纹理数据)以及高动态范围空间转换函数(Inverse Tone Mapping)的参数θ。一般可以初始化为如图12所示的灰色常量。其中,在机器学习拟合的过程中不优化相机的位置、朝向、灯光的参数(不优化)。
在步骤802中,设置角色的朝向,具体而言,在机器学习拟合的过程中设置虚拟对象(即模型)的朝向。
在实际的游戏运行中,虚拟对象不会一直处于同一个朝向,所以需要拟合角色各个朝向的渲染效果,这样才能保证虚拟对象各个状态下的拟合效果都是比较准确的。如图13所示的虚拟对象的朝向设置实例,虚拟对象1301的三个不同朝向,在每一次迭代的时候都会随机设置一个虚拟对象的朝向。
在步骤803中,进行标准渲染处理,得到HDRR渲染结果。
HDRR渲染结果是需要进行拟合的标准渲染效果,需要首先用HDRT进行PBR渲染,得到如图14所示的虚拟对象1401某个朝向的渲染图(即标准渲染图)。
在步骤804中,进行拟合渲染处理,得到包括虚拟对象的拟合渲染图。
然后采用同样的虚拟对象朝向,以第一纹理数据作为资源,用不受光材质(Unlit)的渲染方式进行渲染,得到如图15所示的相同朝向的渲染图(即拟合渲染图)。
在步骤805中,计算渲染损失Loss。
为了让拟合渲染图逼近标准渲染图,需要对拟合纹理进行修改,首先计算拟合渲染图与标准渲染图之间差距,图16为两者在屏幕空间的像素差异值,本申请实施例采用L1Loss函数,如公式(14)所示:
其中,Img1和Img2分别表示标准渲染图与拟合渲染图,H和W分别表示Img1(或Img2)的长宽,
表示标准渲染图与拟合渲染图在屏幕空间的像素值差异,(i,j)表示标准渲染 图中在屏幕空间中任一像素。
在步骤806中,判断渲染损失是否小于阈值,如果小于,转入步骤807和步骤808,如果不小于,转入步骤809。
在步骤807中,计算第一纹理数据的梯度(Gradient)以及转换参数的梯度(Gradient)。
在计算Loss之后,就可以通过PyTorch框架以及可微分光栅化渲染器计算出渲染损失Loss对第一纹理数据的梯度以及对转换参数的梯度。
在步骤808中,用相应的梯度分别更新第一纹理数据以及转换参数,并继续执行步骤802至步骤805。
在计算第一纹理数据的梯度后,使用PyTorch的优化器对第一纹理数据进行更新,在计算转换参数的梯度后,使用PyTorch的优化器对转换参数进行更新,然后转入步骤802以进入下一次迭代,不停地重复前面步骤802-步骤807的迭代过程,第一纹理数据以及对转换参数就会逐渐收敛逼近到最佳值。
在步骤809中,将第一纹理数据以及转换参数的格式进行转换并输出。
当渲染损失Loss的值小于损失阈值时,表征拟合渲染图已经非常接近标准渲染图,即可以退出整个迭代过程,保存第一纹理数据以及转换参数,在游戏中就可以采用UV2来采样所得到的纹理图,以获取接近HDRR的渲染结果。
下面详细介绍步骤804,它包括2个子步骤,解决了如何使用第一纹理数据来进行Unlit渲染,首先,使用公式(10)或者公式(13)将LDR空间的第一纹理数据转换到HDR空间,算法刚开始的时候,公式的参数是默认的初始值,后来在每次迭代的过程中,公式的参数(转换参数)均是基于前一次迭代更新得到的,在算法迭代的过程中通过Gradient逐步更新到最优的参数。接着,使用UV2纹理坐标将采样得到的HDR空间的纹理数据渲染到世界空间中得到最后的屏幕空间渲染结果。
参见图17,图17是本申请实施例提供的图像渲染方法的损失效果示意图,1701是标准渲染图,1702是使用步骤808输出的第一纹理数据,然后通过拟合得到高动态范围空间转换函数进行渲染得到的目标渲染图,拟合得到高动态范围空间转换函数如图18所示,高动态范围空间转换函数中的转换参数是步骤808输出的,1703是目标渲染图与标准渲染图两者之间的像素差异值,可以看出两者之间的像素差异值是很小的。
本申请实施例提供了一种图像渲染方法,能够全自动对HDRR这种渲染方式进行拟合,并且能够拟合得到LDRT,以代替HDRT作为渲染基础,大大降低纹理数据量,能以很低的开销实现逼近HDRR的渲染效果,大幅提升了游戏的帧率,降低了对电量的消耗。
可以理解的是,在本申请实施例中,涉及到用户信息等相关的数据,当本申请实施例运用到具体产品或技术中时,需要获得用户许可或者同意,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
下面继续说明本申请实施例提供的图像渲染装置455的实施为软件模块的示例性结构,在一些实施例中,如图2所示,存储在存储器450的图像渲染装置455中的软件模块可以包括:获取模块4551,配置为获取虚拟对象的第一纹理数据、以及对应虚拟对象的第二纹理数据的转换参数,其中,第一纹理数据的数据量小于第二纹理数据的数据量,且第一纹理数据的图像信息范围小于第二纹理数据的图像信息范围;拟合模块4552,配置为基于转换参数以及第一纹理数据,进行拟合渲染处理,得到包括虚拟对象的拟合渲染图;损失模块4553,配置为确定拟合渲染图与标准渲染图之间的渲染损失,并基于渲染损失更新转换参数以及第一纹理数据,其中,标准渲染图是基于第二纹理数据进行标准渲染处理得到的包括虚拟对象的渲染图;渲染模块4554,配置为基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理,得到包括虚拟对象的目标渲染图。
在一些实施例中,拟合模块4552,还配置为:基于转换参数,对虚拟对象的第一纹理数据进行朝向第二纹理数据的空间转换处理,得到第三纹理数据,其中,第三纹理数据与第二纹理数据处于相同动态范围空间;基于第三纹理数据,进行对应虚拟对象的拟合渲染处理,得到包括虚拟对象的拟合渲染图。
在一些实施例中,第一纹理数据包括每个纹理像素的第一颜色值,第二纹理数据包括每个纹理像素的第二颜色值,转换参数包括第一一阶参数以及第二一阶参数;拟合模块4552,还配置为:针对第一纹理数据中的每个纹理像素执行以下处理:确定纹理像素的第一颜色值与第一一阶参数之间的第一乘积结果;确定第一乘积结果与第二一阶参数的第一求和结果;将纹理像素对应的第一颜色值与第一求和结果之间的比值,确定为纹理像素的与第二颜色值处于相同动态范围空间的第三颜色值;将多个纹理像素的第三颜色值,组成第三纹理数据。
在一些实施例中,第一纹理数据包括每个纹理像素的第一颜色值,第二纹理数据包括每个纹理像素的第二颜色值,转换参数包括第一二阶参数、第二二阶参数、第三二阶参数、第四二阶参数以及第五二阶参数;拟合模块4552,还配置为:针对第一纹理数据中的每个纹理像素执行以下处理:确定纹理像素的第一颜色值的平方与第一二阶参数之间的第二乘积结果、以及纹理像素的第一颜色值与第二二阶参数之间的第三乘积结果;对第二乘积结果、第三乘积结果以及第三二阶参数进行求和处理,得到第二求和 结果;确定纹理像素的第一颜色值与第四二阶参数之间的第四乘积结果;对第二求和结果的平方根、第四乘积结果以及第三二阶参数的平方根进行求和处理,得到第三求和结果;确定纹理像素的第一颜色值与第五二阶参数的第四求和结果;将第三求和结果与第四求和结果之间的比值,确定为纹理像素的与第二颜色值处于相同动态范围空间的第三颜色值;将多个纹理像素的第三颜色值,组成第三纹理数据。
在一些实施例中,第三纹理数据包括每个纹理像素的第三颜色值;拟合模块4552,还配置为:获取虚拟对象的二维纹理坐标;获取对应拟合渲染处理的可微分渲染框架;将二维纹理坐标以及每个纹理像素的第三颜色值在可微分渲染框架中进行正向传播,得到包括虚拟对象的拟合渲染图。
在一些实施例中,损失模块4553,还配置为:确定标准渲染图与拟合渲染图在屏幕空间的整体像素值差异;基于整体像素值差异、拟合渲染图的长度以及拟合渲染图的宽度,确定渲染损失。
在一些实施例中,损失模块4553,还配置为:针对拟合渲染图以及标准渲染图在屏幕空间中任一相同像素执行以下处理:确定拟合渲染图中对应像素的第一像素值,并确定标准渲染图中对应像素的第二像素值;将第一像素值与第二像素值之间的差值的绝对值作为像素的像素值差异;对屏幕空间中多个像素的像素值差异进行求和处理,得到整体像素值差异。
在一些实施例中,损失模块4553,还配置为:基于渲染损失对第一纹理数据进行偏导处理,得到对应第一纹理数据的梯度;基于渲染损失对转换参数进行偏导处理,得到对应转换参数的梯度;基于对应第一纹理数据的梯度,更新第一纹理数据,并基于对应转换参数的梯度,更新转换参数。
在一些实施例中,损失模块4553,还配置为:将设定学习率与对应第一纹理数据的梯度相乘,得到第一纹理数据的数据变化值;将第一纹理数据的数据变化值与第一纹理数据相加,得到更新的第一纹理数据;将设定学习率与对应转换参数的梯度相乘,得到转换参数的数据变化值;将转换参数的数据变化值与转换参数相加,得到更新的转换参数。
在一些实施例中,渲染模块4554,还配置为:当渲染损失小于损失阈值时,基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理;或者当更新次数达到更新次数阈值时,基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理。
在一些实施例中,渲染模块4554,还配置为:基于更新的转换参数,对更新的第一纹理数据进行朝向第二纹理数据的空间转换处理,得到第四纹理数据,其中,所述第四纹理数据与所述第二纹理数据处于相同动态空间范围;确定虚拟对象的至少一个二维纹理坐标;针对每个二维纹理坐标执行以下处理:从第四纹理数据中采样出与二维纹理坐标对应的纹理图像,并对采样得到的纹理图像进行贴合处理;基于每个二维纹理坐标的贴合结果,生成包括虚拟对象的目标渲染图。
本申请实施例提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行本申请实施例上述的图像渲染方法。
本申请实施例提供一种存储有可执行指令的计算机可读存储介质,其中存储有可执行指令,当可执行指令被处理器执行时,将引起处理器执行本申请实施例提供的图像渲染方法,例如,如图3A-3C示出的图像渲染方法。
在一些实施例中,计算机可读存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、闪存、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
在一些实施例中,可执行指令可以采用程序、软件、软件模块、脚本或代码的形式,按任意形式的编程语言(包括编译或解释语言,或者声明性或过程性语言)来编写,并且其可按任意形式部署,包括被部署为独立的程序或者被部署为模块、组件、子例程或者适合在计算环境中使用的其它单元。
作为示例,可执行指令可以但不一定对应于文件系统中的文件,可以可被存储在保存其它程序或数据的文件的一部分,例如,存储在超文本标记语言(HTML,Hyper Text Markup Language)文档中的一个或多个脚本中,存储在专用于所讨论的程序的单个文件中,或者,存储在多个协同文件(例如,存储一个或多个模块、子程序或代码部分的文件)中。
作为示例,可执行指令可被部署为在一个电子设备上执行,或者在位于一个地点的多个电子设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个电子设备上执行。
综上所述,通过本申请实施例对第一纹理数据以及第二纹理数据分别进行渲染处理,并基于渲染结果之间的损失更新第一纹理数据以及基于第一纹理数据进行渲染时所涉及到的转换参数,由于第二纹理数据的图像范围信息优于第一纹理数据,且第一纹理数据的数据量由于第二纹理数据,从而基于更新得到的第一纹理数据以及转换参数进行实时图像渲染时,可以达到对标第二纹理数据的渲染效果,仅消耗较少的存储空间以及计算资源,进而有效提高渲染资源利用率。
以上所述,仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本申请的保护范围之内。
Claims (15)
- 一种图像渲染方法,所述方法由电子设备执行,所述方法包括:获取虚拟对象的第一纹理数据、以及与所述虚拟对象的第二纹理数据对应的转换参数,其中,所述第一纹理数据的数据量小于所述第二纹理数据的数据量,且所述第一纹理数据的图像信息范围小于所述第二纹理数据的图像信息范围;基于所述转换参数以及所述第一纹理数据,进行拟合渲染处理,得到包括所述虚拟对象的拟合渲染图;确定所述拟合渲染图与标准渲染图之间的渲染损失,并基于所述渲染损失更新所述转换参数以及所述第一纹理数据,其中,所述标准渲染图是基于所述第二纹理数据进行标准渲染处理得到的包括所述虚拟对象的渲染图;基于更新的转换参数以及更新的第一纹理数据,进行实时渲染处理,得到包括所述虚拟对象的目标渲染图。
- The method according to claim 1, wherein the performing fitting rendering processing based on the conversion parameter and the first texture data to obtain a fitted rendering image comprising the virtual object comprises: performing, based on the conversion parameter, space conversion processing on the first texture data of the virtual object toward the second texture data to obtain third texture data, wherein the third texture data and the second texture data are in the same dynamic range space; and performing fitting rendering processing based on the third texture data to obtain a fitted rendering image comprising the virtual object.
- The method according to claim 2, wherein the first texture data comprises a first color value of each texel, the second texture data comprises a second color value of each texel, and the conversion parameter comprises a first first-order parameter and a second first-order parameter; and the performing, based on the conversion parameter, space conversion processing on the first texture data of the virtual object toward the second texture data to obtain third texture data comprises: performing the following processing for each texel in the first texture data: determining a first product result between the first color value of the texel and the first first-order parameter; determining a first summation result of the first product result and the second first-order parameter; and determining the ratio between the first color value of the texel and the first summation result as a third color value of the texel that is in the same dynamic range space as the second color value; and composing the third color values of a plurality of texels into the third texture data.
- The method according to claim 2, wherein the first texture data comprises a first color value of each texel, the second texture data comprises a second color value of each texel, and the conversion parameter comprises a first second-order parameter, a second second-order parameter, a third second-order parameter, a fourth second-order parameter and a fifth second-order parameter; and the performing, based on the conversion parameter, space conversion processing on the first texture data of the virtual object toward the second texture data to obtain third texture data comprises: performing the following processing for each texel in the first texture data: determining a second product result between the square of the first color value of the texel and the first second-order parameter, and a third product result between the first color value of the texel and the second second-order parameter; summing the second product result, the third product result and the third second-order parameter to obtain a second summation result; determining a fourth product result between the first color value of the texel and the fourth second-order parameter; summing the square root of the second summation result, the fourth product result and the square root of the third second-order parameter to obtain a third summation result; determining a fourth summation result of the first color value of the texel and the fifth second-order parameter; and determining the ratio between the third summation result and the fourth summation result as a third color value of the texel that is in the same dynamic range space as the second color value; and composing the third color values of a plurality of texels into the third texture data.
- The method according to claim 2, wherein the third texture data comprises a third color value of each texel; and the performing, based on the third texture data, fitting rendering processing corresponding to the virtual object to obtain a fitted rendering image comprising the virtual object comprises: acquiring two-dimensional texture coordinates of the virtual object; acquiring a differentiable rendering framework corresponding to the fitting rendering processing; and forward-propagating the two-dimensional texture coordinates and the third color value of each texel in the differentiable rendering framework to obtain a fitted rendering image comprising the virtual object.
- The method according to claim 1, wherein the determining a rendering loss between the fitted rendering image and a standard rendering image comprises: determining an overall pixel value difference between the standard rendering image and the fitted rendering image in screen space; and determining the rendering loss based on the overall pixel value difference, the length of the fitted rendering image and the width of the fitted rendering image.
- The method according to claim 6, wherein the determining an overall pixel value difference between the standard rendering image and the fitted rendering image in screen space comprises: performing the following processing for any identical pixel of the fitted rendering image and the standard rendering image in the screen space: determining a first pixel value of the corresponding pixel in the fitted rendering image, and determining a second pixel value of the corresponding pixel in the standard rendering image; and taking the absolute value of the difference between the first pixel value and the second pixel value as the pixel value difference of the pixel; and summing the pixel value differences of a plurality of pixels in the screen space to obtain the overall pixel value difference.
- The method according to claim 1, wherein the updating the conversion parameter and the first texture data based on the rendering loss comprises: performing partial derivative processing on the first texture data based on the rendering loss to obtain a gradient corresponding to the first texture data; performing partial derivative processing on the conversion parameter based on the rendering loss to obtain a gradient corresponding to the conversion parameter; and updating the first texture data based on the gradient corresponding to the first texture data, and updating the conversion parameter based on the gradient corresponding to the conversion parameter.
- The method according to claim 8, wherein the updating the first texture data based on the gradient corresponding to the first texture data comprises: multiplying a set learning rate by the gradient corresponding to the first texture data to obtain a data change value of the first texture data; and adding the data change value of the first texture data to the first texture data to obtain the updated first texture data; and the updating the conversion parameter based on the gradient corresponding to the conversion parameter comprises: multiplying the set learning rate by the gradient corresponding to the conversion parameter to obtain a data change value of the conversion parameter; and adding the data change value of the conversion parameter to the conversion parameter to obtain the updated conversion parameter.
- The method according to claim 1, wherein the performing real-time rendering processing on the virtual object based on the updated conversion parameter and the updated first texture data comprises: performing the real-time rendering processing on the virtual object based on the updated conversion parameter and the updated first texture data when the rendering loss is smaller than a loss threshold; or performing the real-time rendering processing on the virtual object based on the updated conversion parameter and the updated first texture data when the number of updates reaches an update-count threshold.
- The method according to claim 1, wherein the performing real-time rendering processing on the virtual object based on the updated conversion parameter and the updated first texture data to obtain a target rendering image comprising the virtual object comprises: performing, based on the updated conversion parameter, space conversion processing on the updated first texture data toward the second texture data to obtain fourth texture data, wherein the fourth texture data and the second texture data are in the same dynamic range space; determining at least one two-dimensional texture coordinate of the virtual object; performing the following processing for each two-dimensional texture coordinate: sampling, from the fourth texture data, a texture image corresponding to the two-dimensional texture coordinate, and performing mapping processing on the sampled texture image; and generating, based on the mapping result of each two-dimensional texture coordinate, a target rendering image comprising the virtual object.
- An image rendering apparatus, the apparatus comprising: an acquisition module, configured to acquire first texture data of a virtual object and a conversion parameter corresponding to second texture data of the virtual object, wherein the data volume of the first texture data is smaller than the data volume of the second texture data, and the image information range of the first texture data is smaller than the image information range of the second texture data; a fitting module, configured to perform fitting rendering processing based on the conversion parameter and the first texture data to obtain a fitted rendering image comprising the virtual object; a loss module, configured to determine a rendering loss between the fitted rendering image and a standard rendering image, and update the conversion parameter and the first texture data based on the rendering loss, wherein the standard rendering image is a rendering image comprising the virtual object obtained by performing standard rendering processing based on the second texture data; and a rendering module, configured to perform real-time rendering processing based on the updated conversion parameter and the updated first texture data to obtain a target rendering image comprising the virtual object.
- An electronic device, comprising: a memory, configured to store computer-executable instructions; and a processor, configured to implement the image rendering method according to any one of claims 1 to 11 when executing the computer-executable instructions stored in the memory.
- A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the image rendering method according to any one of claims 1 to 11.
- A computer program product, comprising a computer program or computer-executable instructions which, when executed by a processor, implement the image rendering method according to any one of claims 1 to 11.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22929625.6A EP4394715A1 (en) | 2022-03-03 | 2022-12-02 | Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product |
US18/369,721 US20240005588A1 (en) | 2022-03-03 | 2023-09-18 | Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210202833.0A CN116740256A (zh) | 2022-03-03 | 2022-03-03 | Image rendering method and apparatus, electronic device, storage medium, and program product |
CN202210202833.0 | 2022-03-03 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/369,721 Continuation US20240005588A1 (en) | 2022-03-03 | 2023-09-18 | Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023165198A1 (zh) | 2023-09-07 |
Family
ID=87882958
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/136193 WO2023165198A1 (zh) | 2022-12-02 | 2022-03-03 | Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240005588A1 (zh) |
EP (1) | EP4394715A1 (zh) |
CN (1) | CN116740256A (zh) |
WO (1) | WO2023165198A1 (zh) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7932914B1 (en) * | 2005-10-20 | 2011-04-26 | Nvidia Corporation | Storing high dynamic range data in a low dynamic range format |
CN102696220A (zh) * | 2009-10-08 | 2012-09-26 | International Business Machines Corporation | Method and system for converting a digital image from a low dynamic range image to a high dynamic range image |
CN106033617A (zh) * | 2015-03-16 | 2016-10-19 | Guangzhou 4399 Information Technology Co., Ltd. | Method for intelligent compression of game images in combination with a visualization tool |
US20200151509A1 * | 2018-11-12 | 2020-05-14 | Adobe Inc. | Learning to estimate high-dynamic range outdoor lighting parameters |
CN113963110A (zh) * | 2021-10-11 | 2022-01-21 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Texture map generation method and apparatus, electronic device, and storage medium |
CN114067042A (zh) * | 2021-11-08 | 2022-02-18 | Tencent Technology (Shenzhen) Company Limited | Image rendering method, apparatus, device, storage medium, and program product |
Family application timeline:
- 2022-03-03: CN application CN202210202833.0A (published as CN116740256A, pending)
- 2022-12-02: EP application EP22929625.6A (published as EP4394715A1, pending)
- 2022-12-02: PCT application PCT/CN2022/136193 (published as WO2023165198A1, application filing)
- 2023-09-18: US application US18/369,721 (published as US20240005588A1, pending)
Also Published As
Publication number | Publication date |
---|---|
CN116740256A (zh) | 2023-09-12 |
US20240005588A1 (en) | 2024-01-04 |
EP4394715A1 (en) | 2024-07-03 |
Legal Events
Code | Title | Description
---|---|---
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 22929625; Country of ref document: EP; Kind code of ref document: A1
WWE | WIPO information: entry into national phase | Ref document number: 2022929625; Country of ref document: EP
ENP | Entry into the national phase | Ref document number: 2022929625; Country of ref document: EP; Effective date: 20240329
WWE | WIPO information: entry into national phase | Ref document number: 11202402798P; Country of ref document: SG