CN116843736A - Scene rendering method and device, computing device, storage medium and program product - Google Patents

Scene rendering method and device, computing device, storage medium and program product

Info

Publication number
CN116843736A
CN116843736A (application CN202210301604.4A)
Authority
CN
China
Prior art keywords
texture
color
texture block
sampling
block
Prior art date
Legal status
Pending
Application number
CN202210301604.4A
Other languages
Chinese (zh)
Inventor
梁跃
王学强
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210301604.4A
Publication of CN116843736A

Classifications

    • G06T 7/49: Analysis of texture based on structural texture description, e.g. using primitives or placement rules
    • G06T 15/04: Texture mapping
    • G06T 15/205: Image-based rendering
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/90: Determination of colour characteristics

Abstract

The present disclosure relates to a scene rendering method. The method comprises the following steps: obtaining a target virtual texture corresponding to a target scene; dividing the target virtual texture into a plurality of texture blocks, wherein each texture block comprises a plurality of pixels; acquiring a sampling color of each texture block based on the colors of a portion of the pixels in the texture block; determining, based on the sampling color of each texture block in the plurality of texture blocks, a color reference value corresponding to the texture block and a set of color correction values corresponding to the texture block; generating a target physical texture corresponding to the target virtual texture based on the color reference value and the set of color correction values corresponding to each of the plurality of texture blocks; and rendering the target scene based on the target physical texture. In this way, the number of texture sampling operations can be reduced, the amount of data to be transmitted is lowered, and scene rendering efficiency is improved.

Description

Scene rendering method and device, computing device, storage medium and program product
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a scene rendering method, a scene rendering apparatus, a computing device, a computer readable storage medium, and a computer program product.
Background
In the field of computer technology, texture generally refers both to the relief grooves of an object's surface and to a color pattern on a smooth surface. Rendering of three-dimensional (3D), Virtual Reality (VR), Augmented Reality (AR), and other scenes generally involves processing texture data. Typically, texture data is stored in the internal or external storage of a device and loaded into a graphics processing apparatus such as a GPU (Graphics Processing Unit) as needed. However, when the rendered scene changes in real time, large amounts of texture data often need to be processed quickly, which places demands on the data transmission bandwidth and real-time processing performance of the device.
Disclosure of Invention
In view of the above, the present disclosure provides a scene rendering method, a scene rendering device, a computing apparatus, a computer-readable storage medium, and a computer program product, which may alleviate, mitigate, or even eliminate the above-mentioned problems.
According to an aspect of the present disclosure, there is provided a scene rendering method including: obtaining a target virtual texture corresponding to a target scene; dividing the target virtual texture into a plurality of texture blocks, wherein each texture block comprises a plurality of pixels; acquiring a sampling color of each texture block based on the colors of a portion of the pixels in the texture block; determining, based on the sampling color of each texture block in the plurality of texture blocks, a color reference value corresponding to the texture block and a set of color correction values corresponding to the texture block, wherein the set of color correction values corresponding to the texture block comprises a color correction value corresponding to each pixel in the texture block; generating a target physical texture corresponding to the target virtual texture based on the color reference value and the set of color correction values corresponding to each of the plurality of texture blocks; and rendering the target scene based on the target physical texture.
In some embodiments, determining, based on the sampling color of each texture block of the plurality of texture blocks, a color reference value corresponding to the texture block and a set of color correction values corresponding to the texture block comprises: determining a color distribution endpoint value corresponding to each texture block based on the sampling color of the texture block; determining a color reference value corresponding to each texture block based on the color distribution endpoint value corresponding to the texture block; and determining, based on the color reference value of each texture block, a color correction value corresponding to each pixel in the texture block.
In some embodiments, determining color correction values corresponding to individual pixels in each texture block based on the color reference value of the texture block comprises: determining color correction values corresponding to the portion of the pixels in each texture block based on the color reference value and the sampling colors of the texture block; and determining the color correction values corresponding to the respective pixels in the texture block by interpolating the color correction values corresponding to that portion of the pixels.
In some embodiments, determining color correction values corresponding to a portion of the pixels in each texture block based on the color reference value and the sampling colors of the texture block comprises: determining the weight of the color reference value in each sampling color based on the color reference value and the sampling colors of the texture block; and determining a color correction value corresponding to each of the sampled pixels in the texture block based on the weight of the color reference value in each sampling color.
In some embodiments, determining a color correction value corresponding to each of the sampled pixels in the texture block based on the weight of the color reference value in each sampling color comprises: for each sampling color, quantizing the weight of the color reference value in that sampling color to one of a plurality of preset weight values, and using the quantized value as the color correction value of the pixel corresponding to that sampling color.
In some embodiments, the sampling color is a multi-channel color, and determining the color distribution endpoint value of each texture block based on the sampling color of the texture block comprises: for each color channel of the sampling colors, determining a color mean value corresponding to that color channel; determining the comprehensive degree of deviation of the sampling colors from the color mean value of each color channel; determining the axis corresponding to the color channel with the highest comprehensive degree of deviation as the principal axis; and determining the two end points of the projection of the sampling colors on the principal axis as the color distribution end points.
In some embodiments, determining the color reference value corresponding to each texture block based on the color distribution endpoint value corresponding to the texture block comprises: the lossless quantization value of the color distribution endpoint value corresponding to each texture block is determined as the color reference value corresponding to the texture block.
In some embodiments, each texture block of the plurality of texture blocks is a rectangular texture block, and obtaining the sampling colors of each texture block based on the colors of a portion of the pixels of the texture block comprises: for each rectangular texture block, acquiring four sampling colors based on the colors of the pixels at the four vertices of the rectangular texture block.
In some embodiments, dividing the target virtual texture into a plurality of texture blocks comprises: dividing the target virtual texture into a plurality of texture blocks with preset sizes.
In some embodiments, the preset size is 4×4 pixels.
In some embodiments, the color reference values include low dynamic range color values of a red channel, a green channel, a blue channel, and a transparency channel.
In some embodiments, obtaining a target virtual texture corresponding to a target scene includes: sampling the virtual texture model based on the position and orientation of the virtual camera corresponding to the target scene; a target virtual texture is generated based on the sampling result of the virtual texture model.
In some embodiments, generating the target physical texture corresponding to the target virtual texture based on the color reference value and the set of color correction values corresponding to each of the plurality of texture blocks comprises: determining the scaling of the target physical texture according to the position and the orientation of the virtual camera.
According to another aspect of the present disclosure, there is provided a scene rendering apparatus including: a first acquisition module configured to acquire a target virtual texture corresponding to a target scene; a partitioning module configured to partition the target virtual texture into a plurality of texture blocks, wherein each texture block includes a plurality of pixels; a second acquisition module configured to acquire a sampling color of each texture block based on the colors of a portion of the pixels in the texture block; a determining module configured to determine, based on the sampling color of each texture block of the plurality of texture blocks, a color reference value corresponding to the texture block and a set of color correction values corresponding to the texture block, wherein the set of color correction values corresponding to the texture block includes a color correction value corresponding to each pixel in the texture block; a generation module configured to generate a target physical texture corresponding to the target virtual texture based on the color reference value and the set of color correction values corresponding to each of the plurality of texture blocks; and a rendering module configured to render the target scene based on the target physical texture.
According to yet another aspect of the present disclosure, there is provided a computing device comprising: a memory configured to store computer-executable instructions; a processor configured to perform the method provided according to the previous aspect when the computer executable instructions are executed by the processor.
According to a further aspect of the present disclosure, there is provided a computer readable storage medium storing computer executable instructions which, when executed, perform the method provided according to the preceding aspect.
According to a further aspect of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by a processor, implement the steps of the method provided according to the preceding aspects.
In the scene rendering method and apparatus according to some embodiments of the present disclosure, a target virtual texture corresponding to a target scene may be divided into a plurality of texture blocks, a color reference value and a set of color correction values corresponding to each texture block are determined based on the colors of a portion of the pixels in the texture block, and a target physical texture is generated from the determined color reference value and set of color correction values of each texture block for rendering the target scene. Because the target virtual texture is divided into blocks and only a portion of the pixel colors are sampled in each texture block to generate the final target physical texture, the number of texture data reads can be reduced significantly, the number of sampling operations and the amount of computation are greatly reduced, and the amount of data to be transmitted is lowered. As a result, the demands on the data transmission bandwidth of the device and the power consumption caused by data processing (especially on mobile terminals) can be reduced, texture processing efficiency is improved, and scene rendering efficiency is improved in turn. At the same time, since pixel colors within a texture block tend to be similar, sampling only a portion of the pixel colors in each texture block can deliver these gains in texture processing efficiency and scene rendering efficiency while preserving the quality of the resulting target physical texture and without degrading (and possibly even improving) the scene rendering effect. The scene rendering method according to some embodiments of the present disclosure can therefore be applied to a wider range of devices, including mid-range and low-end mobile platforms, so that high-quality scene rendering can be achieved on more devices.
These and other aspects of the disclosure will be apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
FIG. 1 schematically illustrates an example scenario in which the technical solution provided by embodiments of the present disclosure may be applied;
FIG. 2 schematically shows a comparison of rendering effects with and without virtual textures enabled;
FIG. 3 schematically illustrates an example interface diagram in the related art;
FIGS. 4A-4C schematically illustrate examples of experimental results when scene rendering is performed using a texture compression scheme in the related art;
FIG. 5 schematically illustrates an example flow chart of a scene rendering method provided in accordance with an embodiment of the present disclosure;
FIG. 6 schematically illustrates an example flow chart of a method of acquiring a target physical texture provided in accordance with an embodiment of the present disclosure;
FIG. 7 illustrates a schematic diagram of a sampling color acquisition scheme according to an embodiment of the present disclosure;
FIG. 8 illustrates a schematic diagram of a target physical texture scaling scheme according to an embodiment of the present disclosure;
FIG. 9 schematically illustrates an example flow diagram of a target physical texture generation process according to this disclosure;
FIGS. 10A-10B schematically illustrate examples of experimental results when scene rendering using schemes provided by some embodiments of the present disclosure;
FIG. 11 schematically illustrates a comparative experimental example of scene rendering using the scheme provided by the related art and some embodiments of the present disclosure;
FIGS. 12A-12B schematically illustrate experimental examples of scene rendering schemes provided in accordance with some embodiments of the present disclosure;
FIG. 13 schematically illustrates an example interface diagram provided in accordance with an embodiment of the present disclosure;
fig. 14 schematically illustrates an exemplary block diagram of a scene rendering device provided according to an embodiment of the present disclosure;
fig. 15 schematically illustrates an exemplary block diagram of a computing device provided in accordance with an embodiment of the present disclosure.
Detailed Description
Before describing in detail embodiments of the present disclosure, some related concepts will be explained first.
Texture: in the field of computer technology, texture includes both texture in the ordinary sense, i.e., the rugged grooves exhibited by the surface of an object, and the color pattern (often simply called a pattern or motif) on the smooth surface of an object. For a pattern, a colored design is drawn on the surface of the object; for grooves, a colored design is likewise drawn on the surface, but it must additionally create a visual impression of unevenness. Thus, in the field of computer technology, patterns and grooves are produced by the same or similar methods, typically by mapping one or more two-dimensional patterns onto the surface of an object in a specific manner, a process referred to as texture mapping.
Virtual texture: the concept of virtual texture is derived from virtual memory. Similar to virtual memory, virtual texture techniques cut a texture image into many pages, dynamically load the texture image pages required for the current scene according to actual needs, and update in real time the mapping relationship between virtual texture addresses and the addresses of textures present in memory (e.g., store in the form of a Page Table). The texture in memory may be referred to herein as physical texture. As the field of view changes, a portion of the physical texture may be replaced and a portion of the physical texture may be loaded. The virtual texture technique may achieve a more efficient texture transfer rate and reduce consumption of memory space of the graphics processing apparatus compared to conventional texture techniques. In particular, in the conventional streaming texture technique, when a certain portion of texture is required, it is generally required to transmit the entire texture including the texture, for example, a corresponding mip level (mip is an abbreviation of a number in parvo, meaning "a majority in a small space"; versions of different resolutions may be provided for the same texture, which may be understood as corresponding to different mip levels), whereas in the virtual texture technique, the texture may be divided into smaller blocks in advance, and objects in a target scene are analyzed first when the texture is transmitted, and only a visible portion of the texture is transmitted, thereby enabling to maximally reduce the amount of transmitted data and save more storage space. Alternatively, in some application scenarios, virtual texture techniques may also be used in combination with traditional texture techniques.
Texture compression: texture is in two forms in the video memory, namely compressed and uncompressed. The compressed texture can effectively reduce the occupation of the texture to the video memory, and meanwhile, the GPU samples special hardware for texture compression, so that the bandwidth consumption and the performance are all optimal. In general, in order to further reduce data transmission efficiency and save storage space, in application fields such as games, a compressed texture technique is generally used. Currently, ETC1/2, ASTC, PVR and the like are common texture compression formats.
Game engine: refers to the core components of certain pre-built, editable computer game systems or interactive real-time graphics applications. These systems provide game designers with the various tools required to write games, enabling them to create game programs more easily and quickly without starting from scratch. Common game engines include UE4 (Unreal Engine 4), the Unity engine, and others.
RVT: the full scale Runtime Virtual Texture (runtime virtual texture), sometimes also referred to as PVT (Procedural Virtual Texture, process virtual texture). In general, in an application scenario such as game development, it can be divided into editing period and running period. Editing period may be understood as editing various resource settings, logic, etc. using an editor, and run-time refers to the period of use by a user, i.e., the process by which the user obtains the relevant application for use. The runtime virtual texture may dynamically update the virtual texture during application runtime and may be used to represent a large amount of rich texture detail.
AVT: full scale Adaptive Virtual Texture (adaptive virtual texture). Adaptive virtual texture refers to dynamically adjusting the texture resolution of a virtual texture Page (Page) through an indirect indexing mechanism on the basis of RVTs. Which helps to improve the resolution of the close-range texture and increase the detail presentation.
ETC: full scale Ericsson Texture Compression (Ericsson texture compression), an open standard supported by Khronos, is widely used in mobile platforms. It is a lossy compression algorithm designed for perceived quality. ETC2 adds transparent channel support on the basis of ETC 1.
ASTC: full scale Adaptive Scalable Texture Compress (adaptive scalable texture compression), which is a block-based texture compression algorithm developed by ARM, is essentially a heuristic algorithm that selects the most appropriate compression algorithm and parameters for each texture block to achieve the best compression quality.
Fig. 1 schematically illustrates an example application scenario 100 in which the technical solution provided by the embodiments of the present disclosure may be applied.
As shown in fig. 1, application scenario 100 includes a computing device 110. The scene rendering schemes provided by embodiments of the present disclosure may be deployed at computing device 110. Computing device 110 may include, but is not limited to, a cell phone, desktop computer, notebook computer, tablet computer, VR or AR device, smart home appliance, vehicle terminal, and the like. The scene rendering scheme provided by the embodiments of the present disclosure may be installed on the computing device 110 as a stand-alone application, or may be an application service deployed on a cloud server, where the computing device 110 may be deployed with an application to access the application service. Alternatively, the scene rendering schemes provided by embodiments of the present disclosure may be provided as part of other applications, such as game-like applications, VR or AR presentation-like applications, etc., for rendering desired scenes in such applications.
Illustratively, the user 120 may use the scene rendering scheme provided by embodiments of the present disclosure through the computing device 110. For example, the user 120 may view rendered scenes, adjust scenes of desired positions and/or angles, etc., through a user interface provided by the computing device 110.
In some embodiments, the application scenario 100 may also include a server 130. Optionally, the scene rendering scheme provided by the embodiments of the present disclosure may also be deployed on the server 130. Alternatively, the scene rendering scheme provided by embodiments of the present disclosure may also be deployed on a combination of computing device 110 and server 130. The present disclosure is not particularly limited in this respect. For example, user 120 may access server 130 via network 150 through computing device 110 in order to obtain services provided by server 130.
The server 130 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligence platforms, and the like. Moreover, it should be understood that server 130 is shown by way of example only, and that in fact other devices or combinations of devices having computing power and storage capabilities may alternatively or additionally be used to provide corresponding services.
In some embodiments, the application scenario 100 may also include a database 140. Computing device 110 and/or server 130 may be linked with database 140 via network 150, for example, to obtain scene models, texture data, etc. from database 140. The database 140 may be a stand-alone data storage device or group of devices, or may be a back-end data storage device or group of devices associated with other online services, for example.
In addition, in the present disclosure, the network 150 may be a wired network connected via a cable, an optical fiber, or the like, or may be a wireless network such as 2G, 3G, 4G, 5G, wi-Fi, bluetooth, zigBee, li-Fi, or the like, or may be an internal connection line of one or several devices, or the like.
Fig. 2 schematically shows an example comparison diagram 200 of the rendering effect with and without virtual textures enabled. The left side of the diagram 200, indicated by reference numeral 210, is an example effect with virtual textures enabled, and the right side, indicated by reference numeral 220, is an example effect with virtual textures disabled. As can be seen from the figure, in the right region indicated by 220 (virtual textures disabled), the surface of each object has a base color but lacks texture features, and the visual effect is not ideal; in the left region indicated by 210 (virtual textures enabled), the surfaces of the objects are covered with corresponding textures, presenting different roughness, highlights, bump characteristics, and the like, so the rendered scene is richer in detail, more realistic, and visually better. Specifically, compared with the right region indicated by 220, the left region indicated by 210 shows rut marks on the road surface, trees and grass with corresponding vegetation textures, the ground next to the road with soil or stone textures, and so on; these texture details all contribute to a visually better scene.
Fig. 3 schematically illustrates an example interface 300 in the related art. The interface 300 illustrates a settings interface for a runtime virtual texture. As shown, the interface 300 presents four groups of settings, namely "detail", "size", "layout", and "level of detail". Specifically, "detail" includes the virtual texture size and the page table size; "size" includes the virtual texture size (in tiles), the size of each virtual texture tile, and the edge padding of each virtual texture tile; "layout" includes the virtual texture content and whether BC (Block Compression) texture compression is enabled; and "level of detail" includes the texture group option. Alternatively or additionally, the interface 300 may present other groups of settings, and the various groups may include settings for other aspects.
In the related art, the ETC algorithm mentioned above is generally used in the texture compression step for mobile platforms. More specifically, the ETC algorithm is a block-compression-based algorithm. For the RGB (red, green, blue) channels, the RGB data of a 4x4 pixel block can be compressed into a 64-bit data block. First, the 4x4 pixels are divided horizontally or vertically into 2 groups of 4x2, and then, for each group, one 12-bit base color, a 1-bit luminance index, and an 8-bit pixel index can be determined. Thus, the color of each pixel equals the base color plus the luminance deviation selected by the combination of the luminance index and the pixel index. Illustratively, Table 1 shows a table of luminance deviations indexed by luminance index; it may include 16 columns, of which only the first 8 are shown, the last 8 columns being twice the values of the first 8. When determining the luminance deviation from the combination of the luminance index and the pixel index, a column in Table 1 is first selected according to the luminance index, the luminance deviation value in a certain row of that column is then selected according to the pixel index, and this luminance deviation value is added to the base color to obtain the color of the corresponding pixel.
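To make the base-color-plus-offset idea concrete, the following Python sketch decodes a single pixel in an ETC1-style fashion. The intensity modifier table shown is the one published in the open ETC1 specification; the helper is deliberately simplified (it ignores differential mode, sub-block layout, and the actual bit packing of real ETC blocks), so it is an illustration rather than a faithful reproduction of the table referenced above.

```python
import numpy as np

# Intensity modifier table from the public ETC1 specification (8 codewords x 4 offsets).
# Real ETC1 blocks store one 3-bit codeword per 4x2 sub-block and a 2-bit index per pixel;
# the bit packing and differential mode are omitted here for clarity.
ETC1_MODIFIERS = [
    [-8, -2, 2, 8],
    [-17, -5, 5, 17],
    [-29, -9, 9, 29],
    [-42, -13, 13, 42],
    [-60, -18, 18, 60],
    [-80, -24, 24, 80],
    [-106, -33, 33, 106],
    [-183, -47, 47, 183],
]

def decode_etc1_pixel(base_color_rgb, table_codeword, pixel_index):
    """Reconstruct one pixel as base color plus the looked-up luminance offset, clamped to 0..255."""
    offset = ETC1_MODIFIERS[table_codeword][pixel_index]
    return tuple(int(np.clip(channel + offset, 0, 255)) for channel in base_color_rgb)

# Example: a mid-grey base color with codeword 2 and pixel index 3 is brightened by 29.
print(decode_etc1_pixel((128, 128, 128), table_codeword=2, pixel_index=3))  # (157, 157, 157)
```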
In addition, support for Alpha channels (transparency channels, or a channels for short) is added to ETC2, and specifically, it can compress Alpha data of a 4x4 pixel block into a 64-bit data block alone.
However, the applicant has found that when the ETC compression algorithm described above is employed, 16 texture reads are required for each 4x4 pixel block, and that, when different virtual texture content is configured (as indicated by reference numeral 310 in FIG. 3), one texture compression call may need to compress anywhere from one to three textures, which further increases the amount of data to be read. Mobile platforms are typically extremely sensitive to data transmission bandwidth; high bandwidth requirements can lead to low performance and increased power consumption. Figs. 4A-4C schematically show experimental results of scene rendering using the ETC texture compression scheme.
In fig. 4A, the left side is an uncompressed texture 410, the right side is the texture 414 restored after compression with the ETC compression algorithm (hereinafter simply the ETC compressed texture 414), and the middle is the difference texture 412 between the two. As can be seen in fig. 4A, there is a difference between the ETC compressed texture 414 and the uncompressed texture 410. To show the texture difference before and after compression more clearly, the example interface 400B in fig. 4B presents a series of parameter values: the maximum difference, average error, standard deviation, root mean square error, and peak signal-to-noise ratio for each color channel and for the aggregate of all channels. In general, a smaller maximum difference, average error, standard deviation, or root mean square error indicates better compression quality, and a larger peak signal-to-noise ratio indicates better compression quality. As explained before, the principle of the ETC algorithm limits its compression quality, because it approximates the original color by superimposing an offset on the base color, where the offset is looked up in a luminance deviation table whose entries are determined in advance and are independent of the texture being compressed. When no offset in the luminance deviation table can express the difference between the original color and the base color, a significant error occurs, which reduces the compression quality.
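The quality indicators listed above (maximum difference, average error, standard deviation, root mean square error, and peak signal-to-noise ratio) can be computed directly from the original and restored textures. The following Python sketch shows one straightforward way to do so, assuming both textures are 8-bit arrays of identical shape; it is provided for illustration and is not tied to the tool that produced FIG. 4B.

```python
import numpy as np

def compression_metrics(original, restored, max_value=255.0):
    """Per-channel quality metrics for a pair of (H, W, C) 8-bit textures."""
    orig = original.astype(np.float64)
    rest = restored.astype(np.float64)
    diff = orig - rest
    metrics = {}
    for channel in range(orig.shape[2]):
        d = diff[:, :, channel]
        mse = float(np.mean(d ** 2))
        metrics[channel] = {
            "max_difference": float(np.max(np.abs(d))),
            "mean_error": float(np.mean(np.abs(d))),
            "std_deviation": float(np.std(d)),
            "rmse": float(np.sqrt(mse)),
            # PSNR is infinite for identical images; guard against division by zero.
            "psnr_db": float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse),
        }
    return metrics

# Example with a random 8-bit RGBA texture and a slightly perturbed copy of it.
rng = np.random.default_rng(0)
texture = rng.integers(0, 256, size=(64, 64, 4), dtype=np.uint8)
restored = np.clip(texture.astype(np.int16) + rng.integers(-4, 5, texture.shape), 0, 255).astype(np.uint8)
print(compression_metrics(texture, restored)[0])  # metrics for the red channel
```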
Furthermore, as mentioned previously, for the ETC compression scheme one texture compression call may need to compress from one to three textures, which increases the amount of data that needs to be read. Fig. 4C illustrates example performance changes of a mobile device when running conventional ETC texture compression. It should be appreciated that while the experimental results of fig. 4C were obtained on a specific chip, this is merely exemplary, and similar results may be observed when experiments are performed with other chips. Fig. 4C shows how various GPU-related metrics change over time, including: GPU Temperature, Clock, bus busyness (Bus Busy), total reads (Read Total), shader busyness (Shaders Busy), shader ALU capacity utilization (Shader ALU Capacity Utilized, where ALU is an Arithmetic Logic Unit), stalls on system memory (Stalled on System Memory), texture fetch stalls (Texture Fetch Stall), and so on. As shown, the time region indicated by reference numeral 430 contains the metric curves of the idle phase (i.e., when no ETC texture compression is running), and the time region indicated by reference numeral 432 contains the metric curves when ETC2 texture compression is running. It can be seen that when ETC texture compression is running, the various GPU metrics drop; for example, GPU busyness is significantly reduced. Through experiment and analysis, the applicant found that this phenomenon occurs because the large volume of texture data reads creates a bandwidth bottleneck, so that no more data can be supplied to the GPU for processing; the GPU therefore cannot run at full speed, and its metrics slide downward. At this point, if the user is viewing the rendered scene on a mobile device containing the chip, stuttering may be experienced, which adversely affects the viewing experience.
Based on the above analysis, the applicant has proposed a new scene rendering method which helps to solve or alleviate the above problems. Fig. 5 schematically illustrates an example flowchart of a scene rendering method 500 provided in accordance with an embodiment of the disclosure. The scene rendering method 500 may be deployed on the computing device 110 in the scene 100 shown in fig. 1, or may be disposed on the server 130 or a combination of the computing device 110 and the server 130, for example.
Specifically, in step 510, a target virtual texture corresponding to a target scene may be acquired. For example, the target scene may refer to a scene that the user is watching, or may also refer to a scene that the user intends to show; the target virtual texture may refer to a virtual texture corresponding to a surface or a portion of a surface in the target scene, which may be pre-stored, or may be generated during the running of the application. Alternatively, one or more target virtual textures may be acquired simultaneously.
For example, the virtual texture may be pre-generated by the developer of the application. It may be stored in the storage of the computing device along with the installation files of the application for use by the graphics processing apparatus, or it may be stored on the server side and transmitted to the corresponding computing device when needed. Alternatively, the virtual texture may be embodied as one or several texture maps, each of which may be partitioned into a number of texture pages, and each texture page may carry location information reflecting the coordinate position of the texture page on the model. When the position or orientation of the virtual camera corresponding to the user's field of view changes, or when the scene itself changes, the corresponding texture pages can be obtained directly from the texture map according to the coordinate positions of the current scene and used as the target virtual texture. In addition, since in a three-dimensional view nearby objects appear relatively larger and distant objects relatively smaller, texture maps at different zoom levels may be stored in advance for each texture map, so that the corresponding texture page can be acquired at an appropriate zoom level according to the distance, angle, and so on of the object in the target scene relative to the current viewpoint. Alternatively, the scaling process may be performed on the acquired texture page while the application is running.
Alternatively, the virtual texture may be generated automatically during application execution, for example. For example, when the position or orientation of the virtual camera corresponding to the user's field of view changes, or when the scene itself changes, a corresponding target virtual texture may be generated by photographing (typically vertically photographing) a surface in the target scene using the camera.
At step 520, the target virtual texture may be divided into a plurality of texture blocks, wherein each texture block may include a plurality of pixels. The target virtual texture may be divided into a preset number of texture blocks, or may be divided into an adaptively determined number of texture blocks, for example; the texture blocks may be equal in size or may not be exactly equal. Here, the size of the texture block may be measured in terms of the number of pixels, and the shape of each texture block may be square, or may be another shape such as a rectangle or triangle. These can be set according to specific requirements.
At step 530, the sampling color of each texture block may be obtained based on the color of a portion of the pixels in the plurality of pixels of the texture block. As mentioned in step 520, each texture block may include a plurality of pixels, and the color of a portion of the pixels of one texture block may be sampled as the sampled color of that texture block. Alternatively, the color obtained from the partial pixel samples of the texture block may be further processed, such as color space conversion, bit expansion, quantization processing, etc., and the processed result may be used as the sampling color of the texture block.
In step 540, a color reference value corresponding to each texture block of the plurality of texture blocks and a set of color correction values corresponding to the texture block may be determined based on the sampled color of the texture block, wherein the set of color correction values corresponding to the texture block may include color correction values corresponding to individual pixels in the texture block. Alternatively, there may be one, two, or more color reference values for one texture block, and the number of color correction values may correspond to the number of pixels contained in the texture block, i.e., the color correction values may correspond one-to-one to the pixels in the texture block. For example, for a texture block, the color reference value may be determined based on one or more of a mean, a maximum, a minimum, a median, etc. of the sampled colors of the texture block, and the color correction value may characterize the difference of all or part of the colors in the texture block from the color reference value. For example, the color reference value and the color correction value may be determined according to the determination methods of the color base value and the index value (including the brightness index and the pixel index) described above with respect to the ETC algorithm, respectively, or may be determined according to other schemes. Thereby, the color of the pixel in the texture block is allowed to be restored based on the color reference value and the color correction value corresponding to the texture block. It will be appreciated that depending on the determination scheme of the color reference value and the color correction value, there may be an error in the restored pixel color from the original pixel color.
In step 550, a target physical texture corresponding to the target virtual texture may be generated based on the set of color reference values and color correction values corresponding to each of the plurality of texture blocks. For example, the color information of the texture blocks may be compressed according to the determined color reference value and color correction value, and each texture block of the compressed target virtual texture may be stored in a memory such as a GPU as a corresponding target physical texture. Alternatively, more processing such as interpolation, encoding, etc. may be performed on the color reference value and the color correction value corresponding to each texture block. And, optionally, a zoom level or the like may be determined for the target physical texture.
In step 560, the target scene may be rendered based on the target physical texture. The rendering of the target scene may be achieved, for example, by rendering the target physical texture at a corresponding surface in the target scene. For example, a mapping table (e.g., a page table) may be maintained that reflects a mapping relationship between virtual texture addresses and physical texture addresses, and when an object surface in a target scene needs to be rendered, a storage address of a corresponding target physical texture may be searched for by searching the mapping table according to corresponding coordinate position data, and if found, the storage address may be accessed and the corresponding target physical texture may be used to render the corresponding object surface; if not found, a corresponding physical texture may be generated according to steps 510-550 and the information in the mapping table may be updated accordingly.
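A minimal sketch of the mapping-table lookup described above, written in Python. The dictionary stands in for the page table, and generate_physical_texture is a hypothetical callback standing in for steps 510-550; a real implementation would also evict pages that are no longer needed.

```python
class VirtualTexturePageTable:
    """Maps virtual page coordinates to physical texture data, generating missing pages on demand."""

    def __init__(self, generate_physical_texture):
        # Callback standing in for steps 510-550 (acquire, divide, sample, compress); hypothetical name.
        self._generate = generate_physical_texture
        self._pages = {}  # (page_x, page_y, mip_level) -> physical texture data

    def resolve(self, page_x, page_y, mip_level=0):
        key = (page_x, page_y, mip_level)
        if key not in self._pages:
            # Cache miss: build the physical texture for this page and record the new mapping.
            self._pages[key] = self._generate(page_x, page_y, mip_level)
        return self._pages[key]

# Usage: the renderer asks for the page covering a surface; missing pages are built lazily.
table = VirtualTexturePageTable(lambda x, y, mip: f"compressed blocks for page ({x}, {y}), mip {mip}")
print(table.resolve(3, 7))
```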
By the scene rendering method 500, the target virtual texture can be segmented, and the colors of partial pixels can be sampled in each texture block to generate the final target physical texture, which helps to reduce the number of times of reading texture data, reduce the number of times of sampling processing, and reduce the required data transmission amount. Therefore, the demand for the data transmission bandwidth of the equipment can be reduced, and the texture processing efficiency is improved. Meanwhile, in consideration of the similarity of pixel colors in blocks, by sampling partial pixel colors in each texture block, the above-described lifting effect can be achieved without significantly reducing the target physical texture quality.
In some embodiments, step 510 may include: sampling the virtual texture model based on the position and orientation of the virtual camera corresponding to the target scene; a target virtual texture is generated based on the sampling result of the virtual texture model. The virtual camera can be positioned at the position and the orientation of the camera corresponding to the view angle of the user, and the shot scene is the scene watched by the user. Thus, the change in the user's perspective can be achieved by changing the position and orientation of the virtual camera. When the position and/or orientation of the virtual camera changes, a target scene may be determined based on the photographed field of view of the virtual camera, and thus the surface of the virtual texture model may be sampled accordingly and a target virtual texture may be generated based on the sampling result. The virtual texture model may be, for example, a large scene model created in advance by a developer. The generation of the target virtual texture by automatically sampling the virtual texture model during application execution helps reduce the amount of texture data that needs to be pre-stored.
Illustratively, the target virtual texture may be generated and the target physical texture derived according to a flowchart as shown in FIG. 6. Specifically, as shown in fig. 6, in step 610, a Page that needs to be updated may be acquired. As mentioned previously, the physical texture may be updated as the field of view changes (replacing old physical texture that is no longer needed and/or loading new physical texture that is needed). For example, the target scene area to be updated may be acquired when the position and/or orientation of the virtual camera changes (or when the scene in the current field of view changes), which may be referred to as a Page, i.e. the target scene area to be updated may be embodied as a plurality of pages, which may be, for example, a square with a preset size, or may be an area with another shape. At step 620, model textures on the Page can be rendered onto render texture. In particular, the individual Page areas on the virtual texture model may be photographed vertically using a camera, i.e. the sampling of the virtual texture model may be done by camera vertical photographing. Then, the photographing result of the camera may be rendered onto the renderTexture, thereby generating a target virtual texture to be processed. renderTexture can be understood as a map storing rendering output, and an object photographed by a camera can be finally output to the renderTexture through rendering. It is understood that a plurality of target virtual textures corresponding to different pages may be included on the renderTexture. Subsequently, at step 630, the renderTexture may be run-time texture compressed, i.e., the target virtual texture that it includes is compressed, to obtain the corresponding target physical texture. This step may be performed according to the steps 520 to 550 described previously, which will also be explained in further detail below. In step 640, the compressed RenderTexture may be updated to a physical texture, and used. Similarly, it is understood that a physical texture may include multiple target physical textures corresponding to different target virtual textures (i.e., to different pages).
In some embodiments, step 520 may include: the target virtual texture is divided into a plurality of texture blocks having a preset size, wherein each texture block may include a plurality of pixels as described above. By dividing the target virtual texture into texture blocks with preset sizes, the time consumed by deciding how many texture blocks the target virtual texture should be divided into can be avoided, so that the dividing speed can be improved, and the scene rendering efficiency can be improved. In addition, when dividing the target virtual texture into a plurality of texture blocks having a preset size, the texture block division operation may be directly performed without dividing the target virtual texture into two or more sub-target virtual textures (steps 520 to 540 may then be performed for each sub-target virtual texture, respectively), whereby more division time and more subsequent processing time due to division of the sub-target virtual textures may be avoided. In some embodiments, the preset size may be preset to 4x4 pixels, for example preset by a developer or user, or automatically preset by an application through testing or learning activities. Furthermore, it should be understood that the preset size may be set to other values according to actual needs. By presetting the preset size to 4x4, the quality of the generated target physical texture is guaranteed while the sampling efficiency is considered.
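As a small illustration of the division into blocks of a preset size, the Python sketch below splits an RGBA texture into 4x4 texture blocks, assuming the texture dimensions are multiples of the block size.

```python
import numpy as np

def divide_into_blocks(texture, block_size=4):
    """Split an (H, W, C) texture into block_size x block_size texture blocks."""
    h, w, c = texture.shape
    assert h % block_size == 0 and w % block_size == 0, "texture must tile evenly into blocks"
    return (
        texture.reshape(h // block_size, block_size, w // block_size, block_size, c)
        .swapaxes(1, 2)                    # group the rows and columns of blocks together
        .reshape(-1, block_size, block_size, c)
    )

texture = np.zeros((256, 256, 4), dtype=np.uint8)
print(divide_into_blocks(texture).shape)   # (4096, 4, 4, 4): 64 x 64 blocks of 4x4 RGBA pixels
```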
In some embodiments, each texture block of the plurality of texture blocks may be a rectangular texture block, and step 530 may include: for each rectangular texture block, acquiring four sampling colors based on the colors of the pixels at the four vertices of the rectangular texture block. This is schematically illustrated in fig. 7. Specifically, fig. 7 schematically illustrates a texture block 700, which is divided into 4x4 pixels. Four sampling colors may be obtained based on the colors of the pixels at the four corner positions of the texture block 700 (i.e., the diagonally hatched pixels). Although the texture block 700 in fig. 7 is shown as a 4x4 texture block, it may in fact have another size. It will be appreciated that within a texture block the pixel colors do not usually differ greatly, and the colors at the four corner positions often represent boundary values of the pixel colors in the texture block; therefore, sampling the pixel colors at the four corner positions helps reflect the color characteristics of the texture block more accurately, improves the accuracy of the subsequent determination of the corrected color of each pixel based on the sampled colors, and thus improves the accuracy of the generated target physical texture and avoids excessive distortion. In addition, by always sampling the colors of the pixels at the four corners of each texture block, the time spent on related decisions can likewise be avoided; at the same time, compared with sampling the colors of all pixels in the texture block, or of more than four pixels, sampling the colors of only four pixels helps reduce the number of sampling operations and improves the efficiency of generating the target physical texture, and thus the efficiency of scene rendering.
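A minimal sketch of the four-corner sampling of FIG. 7, assuming texture blocks shaped like those produced by the division sketch above.

```python
import numpy as np

def sample_corner_colors(block):
    """Return the colors of the four corner pixels of a rectangular texture block."""
    return np.stack([
        block[0, 0],     # top-left
        block[0, -1],    # top-right
        block[-1, 0],    # bottom-left
        block[-1, -1],   # bottom-right
    ]).astype(np.float64)

block = np.arange(4 * 4 * 4, dtype=np.uint8).reshape(4, 4, 4)  # a dummy 4x4 RGBA block
print(sample_corner_colors(block))
```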
In some embodiments, step 540 may include: determining a color distribution endpoint value corresponding to each texture block based on the sampling colors of the texture block; determining a color reference value corresponding to each texture block based on the color distribution endpoint value corresponding to the texture block; and determining, based on the color reference value and the sampling colors of each texture block, a color correction value corresponding to each pixel in the texture block. For example, two color distribution end points may be determined based on the sampling colors of each texture block, which define one color axis, or a larger number of color distribution end points may be determined, defining multiple color axes. By comparison, determining two color distribution end points is more conducive to improving texture processing speed. In such an example, the color correction value corresponding to each pixel may characterize the proportions of the two color distribution endpoint values in that pixel's color. Further, in such an example, the lossless quantization value of the color distribution endpoint value corresponding to each texture block may be determined as the color reference value corresponding to the texture block, in order to improve the accuracy of the color reference value and the quality of the target physical texture, and thereby help provide a high-quality scene rendering effect.
In addition, for each texture block, illustratively, an interpolation color corresponding to each pixel in the texture block may be obtained by interpolating (for example, using linear interpolation or other interpolation methods) the sampling color, then, based on the obtained interpolation color, a color distribution endpoint value may be determined, and further, a color correction value corresponding to each pixel may be determined based on the color distribution endpoint value and the interpolation color. Alternatively, for each texture block, the color distribution endpoint value may be determined based on the sampling color, then the color correction value corresponding to a portion of the pixels (i.e., the portion of the pixels used when obtaining the sampling color) may be determined based on the color distribution endpoint value and the sampling color, and further the color correction value corresponding to each pixel in the texture block may be obtained by interpolating (e.g., linear interpolation or other interpolation method) the color correction values. In contrast, the latter approach is more conducive to reducing the amount of computation, while the color correction values obtained by the two approaches are effectively equivalent when linear interpolation is employed. Accordingly, the exemplary determination of color distribution endpoint values and color correction values will be described in detail below primarily as an example of the latter approach, which may be similarly implemented.
Illustratively, when the sampling color is a multi-channel color (e.g., including R, G, B and/or A channels), the color distribution endpoint values may be determined by the following procedure. First, for each color channel of the sampling colors, a color mean value corresponding to that channel may be determined based on the sampling colors. For example, for the R channel, the R-channel values of all sampling colors may be collected and the corresponding color mean determined; the G, B and/or A channels may be processed similarly. Then, the comprehensive degree of deviation of the sampling colors from the color mean of each color channel can be determined. For example, for the R channel, the absolute differences between the R-channel value of each sampling color and the R-channel color mean may be accumulated as the comprehensive degree of deviation, and the G, B and/or A channels may be processed similarly. Next, the axis corresponding to the color channel with the highest comprehensive degree of deviation may be determined as the principal axis. For example, if the comprehensive degree of deviation of the R channel is highest, the axis corresponding to the R channel may be taken as the principal axis. Finally, the two end points of the projection of the sampling colors onto the principal axis may be determined as the color distribution end points. For example, when the axis corresponding to the R channel is determined as the principal axis, each sampling color may be projected onto the R-channel axis to form a number of projection points, and the two outermost projection points may be used as the color distribution endpoint values.
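The channel-deviation procedure just described can be expressed compactly. The Python sketch below is one possible reading of it: the summed absolute deviation per channel selects the principal axis, and the sampled colors with the outermost projections on that axis serve as the color distribution end points.

```python
import numpy as np

def color_distribution_endpoints(sampled_colors):
    """Pick two endpoint colors along the channel with the largest comprehensive deviation.

    sampled_colors: (N, C) array, e.g. the four corner colors of a texture block.
    """
    means = sampled_colors.mean(axis=0)                      # per-channel color mean
    deviation = np.abs(sampled_colors - means).sum(axis=0)   # comprehensive deviation per channel
    principal_channel = int(np.argmax(deviation))            # axis with the highest deviation
    projections = sampled_colors[:, principal_channel]       # project the colors onto that axis
    low = sampled_colors[int(np.argmin(projections))]
    high = sampled_colors[int(np.argmax(projections))]
    return low, high, principal_channel

corners = np.array([[200, 40, 30, 255],
                    [180, 60, 35, 255],
                    [90, 50, 40, 255],
                    [120, 55, 45, 255]], dtype=np.float64)
low, high, axis = color_distribution_endpoints(corners)
print(axis, low, high)  # the red channel (axis 0) has the widest spread in this example
```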
For example, the color correction values corresponding to the respective pixels in the texture block may be determined by the following procedure. First, a color correction value corresponding to a portion of pixels in each texture block may be determined based on a color reference value and a sampling color of the texture block; then, the color correction value corresponding to each pixel in the texture block may be determined by interpolating the color correction value corresponding to the partial pixel in the texture block. For example, the weight of the color reference value in each sampling color may be determined based on the color reference value and the sampling color of each texture block, and then the color correction value corresponding to each of the partial pixels in the texture block may be determined based on the weight of the color reference value in each sampling color; alternatively, the difference between each of the sampling colors and the color reference value may be determined as the sampling color correction value based on the color reference value and the sampling color; etc. Illustratively, in an embodiment using the method of determining a color distribution endpoint value described in the previous paragraph, the weight of the color reference value in each sample color may be determined according to the projection position of each sample color on the principal axis. For example, for a certain sample color, the weight of the color reference value in that sample color may be determined based on the ratio of its distance from one color distribution endpoint to the total distance between two color distribution endpoints. Further, for example, linear interpolation may be employed in interpolating the sampled color correction values to determine color correction values corresponding to individual pixels in the texture block. For example, as shown in fig. 7, the interpolated color correction values corresponding to the two middle positions may be obtained by linear interpolation based on the sampled color correction values corresponding to the two end positions of the first row in the lateral direction, and the fourth row may be similarly processed; then, in the longitudinal direction, based on the sampling/interpolation color correction values corresponding to the positions at both ends of each column, interpolation color correction values corresponding to the two middle positions are obtained by linear interpolation, whereby the obtained sampling/interpolation color correction values can be used as the finally determined color correction values corresponding to the colors of the respective pixel points in the texture block.
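Continuing the example, the following Python sketch computes the weight of one endpoint in each corner color from its projection on the principal axis and then bilinearly interpolates those four weights over a 4x4 block, under the assumption that the per-pixel color correction value is exactly this weight.

```python
import numpy as np

def corner_weights(sampled_colors, low, high, principal_channel):
    """Weight of the 'high' endpoint in each sampled corner color, taken along the principal axis."""
    span = high[principal_channel] - low[principal_channel]
    if span == 0:
        return np.zeros(len(sampled_colors))   # degenerate block: every color projects to one point
    return (sampled_colors[:, principal_channel] - low[principal_channel]) / span

def interpolate_block_weights(w_tl, w_tr, w_bl, w_br, block_size=4):
    """Bilinearly spread the four corner weights over a block_size x block_size block."""
    t = np.linspace(0.0, 1.0, block_size)
    top = (1 - t) * w_tl + t * w_tr           # interpolate along the first row
    bottom = (1 - t) * w_bl + t * w_br        # interpolate along the last row
    s = np.linspace(0.0, 1.0, block_size)[:, None]
    return (1 - s) * top + s * bottom         # then interpolate every column between the two rows

# Corners ordered: top-left, top-right, bottom-left, bottom-right (as in the previous sketch).
corners = np.array([[200, 40, 30, 255], [180, 60, 35, 255],
                    [90, 50, 40, 255], [120, 55, 45, 255]], dtype=np.float64)
low, high = corners[2], corners[0]            # endpoints found as in the previous sketch
w = corner_weights(corners, low, high, principal_channel=0)
print(np.round(interpolate_block_weights(w[0], w[1], w[2], w[3]), 2))
```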
Illustratively, in the foregoing embodiment, the color correction value may be obtained by quantizing the weight that the color reference value occupies in each sampling color. Specifically, for each sampling color, the weight of the color reference value in that sampling color may be approximately quantized to one of several preset weight values, which is then used as the color correction value of the pixel corresponding to that sampling color. It will be appreciated that the weight of the color reference value in the sampling color of the corresponding pixel, as determined in the foregoing embodiments, may be any value between 0 and 1; in order to further reduce the amount of data that needs to be transmitted and/or stored, the interval [0,1] may be divided into several sub-intervals, producing several endpoint values that can serve as the preset weight values. For example, the weights of the color reference value in the respective sampling colors may be quantized with four levels. That is, the interval [0,1] may be divided into four parts, yielding five endpoint values: 0, 0.25, 0.5, 0.75 and 1. Each weight may then be approximated by the closest endpoint value. Further, since floating-point numbers occupy considerable storage space, these five endpoint values may be stored as integers, for example 0 to 4. Similarly, higher quantization levels may be used: when n-level quantization is employed, the interval [0,1] is divided into n parts, producing (n+1) endpoint values that can be represented by the integers 0 to n. It will be appreciated that the higher the quantization level, the higher the quantization accuracy, but also the more data must be transmitted and/or stored. Through experimental analysis, the applicant has found that four-level quantization strikes a good balance between quantization accuracy and data transmission/storage cost.
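The following small sketch illustrates the n-level quantization just described, with four levels by default; the helper names are illustrative assumptions rather than code from the embodiments.

```python
# Illustrative sketch: snap a weight in [0, 1] to the nearest of (n + 1) endpoint
# values and store it as an integer 0..n; dequantization recovers the endpoint.
def quantize_weight(weight: float, levels: int = 4) -> int:
    index = round(weight * levels)            # nearest of 0, 1/n, 2/n, ..., 1
    return max(0, min(levels, index))

def dequantize_weight(index: int, levels: int = 4) -> float:
    return index / levels

# Example: with four levels, a weight of 0.37 is stored as the integer 1 and
# restored as the preset value 0.25.
```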
In some embodiments, the color reference values may include low dynamic range color values of a red channel, a green channel, a blue channel, and a transparency channel. Low dynamic range is defined relative to high dynamic range: the dynamic range describes the range of color values of each color channel, i.e., the span from its minimum to its maximum value. By way of example, a dynamic range that can be represented in 8 bits or fewer may be understood as a low dynamic range, and a dynamic range that requires 12 bits or more may be understood as a high dynamic range. Low dynamic range color values generally meet conventional requirements while requiring less data to transmit and store than high dynamic range color values, which also helps increase data processing speed. Further, the color reference value may be taken directly as the color distribution endpoint value mentioned above, without further processing such as shifting or addition and subtraction, which further reduces computational complexity and improves texture processing efficiency.
In some embodiments, step 550 may include: determining the scaling of the target physical texture according to the position and the orientation of the virtual camera. To convey a sense of depth, a rendered scene generally follows the visual rule that nearer objects appear larger and farther objects appear smaller; the same applies to textures. Therefore, the scaling may be determined when the target physical texture is generated, and a scaled target physical texture may be produced. Compared with adjusting the scaling when the target virtual texture is acquired, this approach helps reduce the data space occupied by the acquired target virtual texture, which facilitates storage and processing, for example by reducing the amount of blocking and the computation of color reference values and color correction values.
For example, an indirect page table may be maintained, as schematically shown in fig. 8. In fig. 8, 810 may denote a virtual texture model that is sampled to obtain the target virtual texture; 820 may denote an indirect page table (PageTableIndex), in which each pixel corresponds to one page of the target virtual texture; and 830 may denote a page table (PageTable), in which the page corresponding to each target virtual texture may be scaled as needed so that it covers a larger area, i.e., more pages, in the physical texture.
In this case, the resolution of the target virtual texture itself is unchanged, but the effective area it occupies in the physical texture is enlarged, i.e., the corresponding target physical texture is enlarged, which achieves a magnification effect while reducing the amount of information. Illustratively, this process may be implemented using the corresponding mechanism in AVT. Used on its own, without the scheme for reducing the sampling count of the target virtual texture provided by the embodiments of the present disclosure, this mechanism helps improve close-range resolution in the scene; when the two are combined, texture processing speed can be improved while still ensuring good close-range resolution.
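As a purely hypothetical sketch of the indirection just described (not the implementation of any particular engine's AVT), a page table entry could carry a scale so that one page of the target virtual texture maps to a larger region of the physical texture:

```python
# Hypothetical sketch: an indirect page table entry that maps one virtual-texture
# page to a scaled region of the physical texture. Field and function names are
# assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class PageEntry:
    phys_x: int   # top-left corner of the mapped region in the physical texture
    phys_y: int
    scale: int    # how many physical-texture pages the region spans per side

def to_physical(entry: PageEntry, u: float, v: float, page_size: int):
    """Map normalized (u, v) inside one virtual page to physical-texture coordinates."""
    span = page_size * entry.scale
    return entry.phys_x + u * span, entry.phys_y + v * span
```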
From the above description, those skilled in the art will appreciate that the scene rendering method 500 described above may be implemented in combination with various virtual texture techniques (e.g., SVT (Streaming Virtual Texture), RVT, AVT, etc.) and/or with various texture compression techniques (e.g., ETC1, ETC2, ASTC, etc.). Taking ASTC as an example, fig. 9 illustrates an example flowchart of a target physical texture generation process 900 in a scene rendering process performed according to an embodiment of the present disclosure.
Specifically, as shown in fig. 9, at step 910 a 4x4 texture block may be downsampled; for example, according to the embodiments described above, only the colors of the pixels at its four corners may be sampled, i.e., the pixels at the upper-left, upper-right, lower-left and lower-right corner positions. The 4x4 texture block may be obtained by dividing the target virtual texture according to the foregoing embodiments. At step 920, the color distribution endpoint values of the texture block may be calculated based on the four sampled colors, which may be implemented according to the endpoint-determination procedure described in the previous embodiments. At step 930, the four sampled colors may be projected onto the axis formed by the two color distribution endpoint values to determine their respective weight values, which may be implemented according to the weight-determination procedure described in the foregoing embodiments. At step 940, the weights corresponding to the colors of the remaining pixels in the texture block may be obtained by interpolating the weights corresponding to the sampled colors, which may be implemented according to the linear interpolation process described in the previous embodiments. Finally, at step 950, the color distribution endpoint values and weights of the texture block may be encoded, saved, and output as the target physical texture. Illustratively, the encoding process may employ BISE (Bounded Integer Sequence Encoding). BISE allows sequences of characters drawn from an alphabet of up to 256 symbols to be stored, encoding each character with the most space-efficient combination of bits, trits (base-3 digits) and quints (base-5 digits). Specifically, for a character whose value range contains at most 2^n values, each character can be encoded directly with n bits; for a value range containing at most 3·2^n values, each character can be encoded with n bits (m) and one trit (t), and reconstructed as t·2^n + m; for a value range containing at most 5·2^n values, each character can be encoded with n bits (m) and one quint (q), and reconstructed as q·2^n + m. In this way, the space/bandwidth required for data storage/transmission can be further compressed.
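For illustration, the BISE value reconstruction mentioned above can be sketched as follows; the helper names are assumptions, and the ranges follow the n-bit/trit/quint description given here.

```python
# Illustrative sketch of BISE value reconstruction: n low-order bits m combined
# with a trit t (base-3 digit) or a quint q (base-5 digit).
def reconstruct_from_trit(t: int, m: int, n: int) -> int:
    assert 0 <= t < 3 and 0 <= m < (1 << n)
    return t * (1 << n) + m          # value range: 0 .. 3 * 2**n - 1

def reconstruct_from_quint(q: int, m: int, n: int) -> int:
    assert 0 <= q < 5 and 0 <= m < (1 << n)
    return q * (1 << n) + m          # value range: 0 .. 5 * 2**n - 1

# Example: with n = 2, a trit-encoded character covers 12 values (0..11);
# t = 2, m = 3 reconstructs to 2 * 4 + 3 = 11.
```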
Figs. 10A and 10B schematically illustrate experimental results of scene rendering using an ASTC algorithm modified in accordance with some embodiments of the present disclosure. In this experiment, downsampling is not employed; instead, the ASTC algorithm is custom-modified according to only some of the embodiments described above. Specifically, these customizations include: each divided texture block is 4x4 pixels in size; two sets of weights are not supported (i.e., the target virtual texture is not further divided into sub-target virtual textures); the color distribution endpoint values use the LDR RGBA Direct mode (low dynamic range with the four channels R, G, B, A), using the color values directly without further processing such as shifting or addition and subtraction; lossless quantization is adopted for the color distribution endpoint values; and the weights corresponding to the pixel colors are quantized with four levels. All of these customizations are described in detail in the previous embodiments.
In fig. 10A, the left side is an uncompressed texture 1010, the right side is the texture restored after compression with the custom-modified ASTC algorithm described above (hereinafter ASTC compressed texture 1014), and the middle is the difference texture 1012 of the two. As can be seen in fig. 10A, some difference remains between the ASTC compressed texture 1014 and the uncompressed texture 1010, but the difference texture 1012 exhibits a relatively small texture difference compared with the difference between the uncompressed texture 410 and the ETC compressed texture 414 in fig. 4A. To show the texture differences before and after compression more clearly, the exemplary interface 1000B in fig. 10B presents a series of parameter values similar to those in fig. 4B: the maximum difference, average error, standard deviation, root mean square error, and peak signal-to-noise ratio, for each color channel and for all channels combined. By comparison, when the custom-modified ASTC algorithm is used, the maximum difference, average error, standard deviation, and root mean square error between the textures before and after compression are all significantly reduced, and the peak signal-to-noise ratio is significantly increased, relative to the ETC compression algorithm. This clearly shows that the scheme shown in fig. 9 helps reduce the difference between the compressed-and-restored texture and the uncompressed texture, thereby helping improve the quality of the compressed texture.
In addition, fig. 11 shows a comparative example graph 1100 in which the same texture is processed using the ETC compression algorithm and using the above custom-modified ASTC algorithm combined with downsampling. Specifically, the left-side view 1110 is an example of a scene surface texture compressed and restored using the ETC (specifically ETC2) compression algorithm, and the right-side view 1120 is an example of the same scene surface texture compressed and restored using the custom-modified ASTC algorithm combined with downsampling. By comparison, the right-side view 1120 is not noticeably different from the left-side view 1110 in the richness and sharpness of texture details. That is, texture compression quality remains good when using the custom-modified ASTC algorithm combined with downsampling, or at least is not compromised relative to the schemes in the related art. At the same time, as previously described, the various embodiments described in this disclosure help improve texture processing efficiency.
Further, to more clearly demonstrate the operational efficiency of the scene rendering process when using the ETC compression algorithm versus the custom-modified ASTC algorithm combined with downsampling, figs. 12A-12B schematically illustrate a set of comparative experimental examples.
Fig. 12A schematically illustrates the parameters used in this experiment in the form of a virtual texture setting interface diagram 1200A, which is similar to the interface diagram shown in fig. 3 and is not described in detail here. Two types of mobile device chips were used in this experiment: chip 1, typically used in lower-end mobile devices, and chip 2, which has higher overall performance and is typically used in higher-end mobile devices. The development engine version was UE4.27.1, and the experimental results are shown in the bar chart 1200B of fig. 12B. The bar height in chart 1200B reflects the texture processing time. Specifically, the processing time when using the ETC compression algorithm is represented by the diagonally filled bars, and the processing time when using the custom-modified ASTC algorithm combined with downsampling is represented by the dot-filled bars. It can be seen that the latter bars are significantly lower than the former, i.e., texture processing efficiency is significantly improved when the custom-modified ASTC algorithm combined with downsampling is used. Furthermore, it will be appreciated that the ASTC algorithm is relatively more complex than algorithms such as ETC1 and ETC2; therefore, texture processing efficiency may be improved even further when the downsampling scheme provided by the embodiments of the present disclosure is combined with those algorithms. It should be appreciated that, although the experimental results shown in fig. 12B are chip-specific, similar effects may be observed on other chips or processing devices.
Fig. 13 schematically illustrates an example interface diagram 1300 provided in accordance with an embodiment of the present disclosure. As indicated by the dashed box, a developer may enable the aforementioned custom-modified ASTC algorithm by checking this option; thereafter, when the relevant application runs on a user's mobile device, the texture compression conversion process from the target virtual texture to the target physical texture may be performed according to the process described in the foregoing embodiments. Similarly, an option for enabling the downsampling mode, or an option that directly adds an ASTC or other compression algorithm combined with downsampling, may be added to the developer interface, so that when the application runs on the user's mobile device, the related texture processing procedure can be performed according to the foregoing embodiments. Alternatively, support for the schemes provided by the present disclosure may be added to the related art in the form of, for example, an extension package.
Fig. 14 schematically illustrates an exemplary block diagram of a scene rendering device 1400 provided in accordance with an embodiment of the disclosure. As shown in fig. 14, the scene rendering apparatus 1400 includes a first acquisition module 1410, a division module 1420, a second acquisition module 1430, a determination module 1440, a generation module 1450, and a rendering module 1460. Illustratively, the scene rendering apparatus 1400 may be deployed on the computing device 110, the server 130, or a combination of both shown in fig. 1.
Specifically, the first acquisition module 1410 may be configured to acquire a target virtual texture corresponding to a target scene; the partitioning module 1420 may be configured to partition the target virtual texture into a plurality of texture blocks, wherein each texture block includes a plurality of pixels; the second obtaining module 1430 may be configured to obtain a sampling color for each of the plurality of texture blocks based on a color of a portion of pixels in the plurality of pixels for the texture block; the determining module 1440 may be configured to determine, based on a sampling color of each texture block of the plurality of texture blocks, a color reference value corresponding to the texture block and a set of color correction values corresponding to the texture block, wherein the set of color correction values corresponding to the texture block includes color correction values corresponding to respective pixels in the texture block; the generating module 1450 may be configured to generate a target physical texture corresponding to the target virtual texture based on the set of color reference values and color correction values corresponding to each of the plurality of texture blocks; rendering module 1460 may be configured to render a target scene based on a target physical texture.
It should be appreciated that the scene rendering device 1400 may be implemented in software, hardware, or a combination of software and hardware. The different modules may be implemented in the same software or hardware structure or one module may be implemented by different software or hardware structures.
In addition, the scene rendering device 1400 may be used to implement the method 500 described above, the relevant details of which have been described in detail above and are not repeated here for brevity. The scene rendering device 1400 may have the same features and advantages as described with respect to the previous method.
Fig. 15 schematically illustrates an example block diagram of a computing device 1500 according to some embodiments of the present application. For example, it may represent the terminal device 110 in fig. 1 or another type of computing device that may be used to deploy the scene rendering apparatus 1400 provided by the present application.
As shown, the example computing device 1500 includes a processing system 1501, one or more computer-readable media 1502, and one or more I/O interfaces 1503 communicatively coupled to each other. Although not shown, the computing device 1500 may also include a system bus or other data and command transfer system that couples the various components to one another. A system bus may include any one or a combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus using any of a variety of bus architectures, and may further include control and data lines.
The processing system 1501 represents functionality to perform one or more operations using hardware. Accordingly, the processing system 1501 is illustrated as including hardware elements 1504 that may be configured as processors, functional blocks, and the like. This may include implementation in hardware as an application-specific integrated circuit or other logic device formed using one or more semiconductors. The hardware elements 1504 are not limited by the materials from which they are formed or the processing mechanisms employed therein. For example, the processor may be composed of semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)). In such a context, the processor-executable instructions may be electronically executable instructions.
Computer-readable medium 1502 is illustrated as including memory/storage 1505. Memory/storage 1505 represents memory/storage associated with one or more computer-readable media. Memory/storage 1505 may include volatile storage media (such as Random Access Memory (RAM)) and/or nonvolatile storage media (such as Read Only Memory (ROM), flash memory, optical disks, magnetic disks, and so forth). Memory/storage 1505 may include fixed media (e.g., RAM, ROM, a fixed hard drive, etc.) and removable media (e.g., flash memory, a removable hard drive, an optical disk, and so forth). By way of example, memory/storage 1505 may be used to store texture data and the like as mentioned in the embodiments above. The computer-readable medium 1502 may be configured in a variety of other ways as described further below.
One or more input/output interfaces 1503 represent functionality that allows a user to enter commands and information into the computing device 1500, and that also allows information to be presented to the user and/or sent to other components or devices using various input/output devices. Examples of input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone (e.g., for voice input), a scanner, touch functionality (e.g., capacitive or other sensors configured to detect physical touch), a camera (e.g., using visible or invisible wavelengths such as infrared to detect touchless motion as gestures), a network card, a receiver, and so forth. Examples of output devices include a display device (e.g., a display or projector), speakers, a printer, a haptic response device, a network card, a transmitter, and so forth. For example, in the embodiments described above, the input devices may allow the user to perform various interactive operations on a scene, such as moving within the scene or changing the field of view, and the output devices may allow the user to view the rendered scene.
The computing device 1500 also includes a scene rendering application 1506. The scene rendering application 1506 may be stored as computer program instructions in the memory/storage 1505. The scene rendering application 1506 may implement all of the functionality of the various modules of the apparatus 1400 described with respect to fig. 14, along with the processing system 1501 and the like.
Various techniques may be described herein in the general context of software, hardware, elements, or program modules. Generally, these modules include routines, programs, objects, elements, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The terms "module," "functionality," and the like as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of computing platforms having a variety of processors.
An implementation of the described modules and techniques may be stored on or transmitted across some form of computer readable media. Computer-readable media can include a variety of media that are accessible by computing device 1500. By way of example, and not limitation, computer readable media may comprise "computer readable storage media" and "computer readable signal media".
"computer-readable storage medium" refers to a medium and/or device that can permanently store information and/or a tangible storage device, as opposed to a mere signal transmission, carrier wave, or signal itself. Thus, computer-readable storage media refers to non-signal bearing media. Computer-readable storage media include hardware such as volatile and nonvolatile, removable and non-removable media and/or storage devices implemented in methods or techniques suitable for storage of information such as computer-readable instructions, data structures, program modules, logic elements/circuits or other data. Examples of a computer-readable storage medium may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical storage, hard disk, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage devices, tangible media, or articles of manufacture adapted to store the desired information and which may be accessed by a computer.
"computer-readable signal medium" refers to a signal bearing medium configured to transmit instructions to hardware of computing device 1500, such as via a network. Signal media may typically be embodied in computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, data signal, or other transport mechanism. Signal media also include any information delivery media. By way of example, and not limitation, signal media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
As previously described, the hardware elements 1504 and the computer-readable medium 1502 represent instructions, modules, programmable device logic, and/or fixed device logic implemented in hardware that, in some embodiments, may be used to implement at least some aspects of the techniques described herein. The hardware elements may include integrated circuits or components of a system on a chip, Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), Complex Programmable Logic Devices (CPLDs), and other implementations in silicon or other hardware devices. In this context, the hardware elements may be implemented as processing devices that perform program tasks defined by instructions, modules, and/or logic embodied by the hardware elements, as well as hardware devices that store instructions for execution, such as the previously described computer-readable storage media.
Combinations of the foregoing may also be used to implement the various techniques and modules described herein. Accordingly, software, hardware, or program modules and other program modules may be implemented as one or more instructions and/or logic embodied on some form of computer-readable storage medium and/or by one or more hardware elements 1504. The computing device 1500 may be configured to implement particular instructions and/or functions corresponding to software and/or hardware modules. Thus, for example, by using a computer-readable storage medium of the processing system and/or the hardware elements 1504, a module may be implemented at least partially in hardware as a module executable by the computing device 1500 as software. The instructions and/or functions may be executed/operated by, for example, one or more computing devices 1500 and/or processing systems 1501 to implement the techniques, modules, and examples described herein.
The techniques described herein may be supported by these various configurations of computing device 1500 and are not limited to the specific examples of techniques described herein.
It will be appreciated that for clarity, embodiments of the application have been described with reference to different functional units. However, it will be apparent that the functionality of each functional unit may be implemented in a single unit, in a plurality of units or as part of other functional units without departing from the application. For example, functionality illustrated to be performed by a single unit may be performed by multiple different units. Thus, references to specific functional units are only to be seen as references to suitable units for providing the described functionality rather than indicative of a strict logical or physical structure or organization. Thus, the application may be implemented in a single unit or may be physically and functionally distributed between different units and circuits.
The present application provides a computer readable storage medium having stored thereon computer readable instructions that, when executed, implement the above-described scene rendering method.
The present application provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computing device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computing device performs the scene rendering method provided in the above-described various embodiments.
Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed subject matter, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (17)

1. A scene rendering method, comprising:
obtaining a target virtual texture corresponding to a target scene;
dividing the target virtual texture into a plurality of texture blocks, wherein each texture block comprises a plurality of pixels;
acquiring a sampling color of each texture block based on colors of partial pixels in the plurality of pixels of the texture block;
determining a color reference value corresponding to each texture block and a color correction value set corresponding to the texture block based on the sampling color of each texture block in the plurality of texture blocks, wherein the color correction value set corresponding to the texture block comprises color correction values corresponding to pixels in the texture block;
generating a target physical texture corresponding to the target virtual texture based on a set of color reference values and color correction values corresponding to each of the plurality of texture blocks;
rendering the target scene based on the target physical texture.
2. The method of claim 1, wherein the determining, based on the sampled color of each texture block of the plurality of texture blocks, a color reference value corresponding to the texture block and a set of color correction values corresponding to the texture block comprises:
determining a color distribution endpoint value corresponding to each texture block based on the sampling color of the texture block;
determining a color reference value corresponding to each texture block based on the color distribution endpoint value corresponding to the texture block;
based on the color reference value of each texture block, a color correction value corresponding to each pixel in the texture block is determined.
3. The method of claim 2, wherein the determining a color correction value corresponding to each pixel in each texture block based on the color reference value of the texture block comprises:
determining a color correction value corresponding to the partial pixels in each texture block based on the color reference value and the sampling color of the texture block;
and determining the color correction value corresponding to each pixel in the texture block by interpolating the color correction value corresponding to the partial pixel in the texture block.
4. A method according to claim 3, wherein said determining a color correction value corresponding to said portion of pixels in each texture block based on the color reference value and the sampling color of the texture block comprises:
determining a weight of the color reference value in each sampling color based on the color reference value and the sampling color of each texture block;
and determining a color correction value corresponding to each pixel in the partial pixels in the texture block based on the weight occupied by the color reference value in each sampling color.
5. The method of claim 4, wherein the determining a color correction value corresponding to each of the partial pixels in the texture block based on the weight of the color reference value in each sampling color comprises:
for each sampling color, approximately quantizing the weight of the color reference value in the sampling color to one preset weight value in a plurality of preset weight values as a color correction value of a pixel corresponding to the sampling color.
6. The method of claim 2, wherein the sampling color is a multi-channel color, and wherein the determining, based on the sampling color of each texture block of the plurality of texture blocks, a color distribution endpoint value for the texture block comprises:
for each color channel of the sampling color, determining a color mean value corresponding to the color channel;
determining the comprehensive deviation degree of the sampling color relative to the color mean value of each color channel;
determining an axis corresponding to the color channel with the highest comprehensive deviation degree as a main shaft;
two end points of the projection of the sampling color on the principal axis are determined as the color distribution end points.
7. The method of claim 2, wherein the determining a color reference value corresponding to each texture block based on the color distribution endpoint value corresponding to the texture block comprises:
The lossless quantization value of the color distribution endpoint value corresponding to each texture block is determined as the color reference value corresponding to the texture block.
8. The method of claim 1, wherein each texture block of the plurality of texture blocks is a rectangular texture block, and wherein the obtaining a sampling color for each texture block of the plurality of texture blocks based on a color of a portion of pixels of the plurality of pixels of the texture block comprises:
for each rectangular texture block, four sampling colors are acquired based on the colors of pixels where four vertexes in the rectangular texture block are located.
9. The method of claim 1, wherein the dividing the target virtual texture into a plurality of texture blocks comprises:
dividing the target virtual texture into a plurality of texture blocks with preset sizes.
10. The method of claim 9, wherein the preset size is 4 x 4 pixels.
11. The method of claim 1, wherein the color reference values comprise low dynamic range color values of a red channel, a green channel, a blue channel, and a transparency channel.
12. The method of claim 1, wherein the acquiring a target virtual texture corresponding to a target scene comprises:
sampling a virtual texture model based on a position and an orientation of a virtual camera corresponding to the target scene;
the target virtual texture is generated based on the sampling result of the virtual texture model.
13. The method of claim 12, wherein the generating a target physical texture corresponding to the target virtual texture based on the set of color reference values and color correction values corresponding to each texture block of the plurality of texture blocks comprises:
and determining the scaling of the target physical texture according to the position and the orientation of the virtual camera.
14. A scene rendering device, comprising:
the first acquisition module is configured to acquire a target virtual texture corresponding to a target scene;
a partitioning module configured to partition the target virtual texture into a plurality of texture blocks, wherein each texture block comprises a plurality of pixels;
a second acquisition module configured to acquire a sampling color of each of the texture blocks based on a color of a part of pixels in the plurality of pixels of the texture block;
a determining module configured to determine, based on a sampling color of each texture block of the plurality of texture blocks, a color reference value corresponding to the texture block and a color correction value set corresponding to the texture block, wherein the color correction value set corresponding to the texture block includes color correction values corresponding to respective pixels in the texture block;
a generation module configured to generate a target physical texture corresponding to the target virtual texture based on a set of color reference values and color correction values corresponding to each of the plurality of texture blocks;
a rendering module configured to render the target scene based on the target physical texture.
15. A computing device, comprising:
a memory configured to store computer-executable instructions;
a processor configured to perform the method according to any one of claims 1 to 13 when the computer executable instructions are executed by the processor.
16. A computer readable storage medium storing computer executable instructions which, when executed, perform the method of any one of claims 1 to 13.
17. A computer program product comprising computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 13.
CN202210301604.4A 2022-03-25 2022-03-25 Scene rendering method and device, computing device, storage medium and program product Pending CN116843736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210301604.4A CN116843736A (en) 2022-03-25 2022-03-25 Scene rendering method and device, computing device, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210301604.4A CN116843736A (en) 2022-03-25 2022-03-25 Scene rendering method and device, computing device, storage medium and program product

Publications (1)

Publication Number Publication Date
CN116843736A true CN116843736A (en) 2023-10-03

Family

ID=88171215

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210301604.4A Pending CN116843736A (en) 2022-03-25 2022-03-25 Scene rendering method and device, computing device, storage medium and program product

Country Status (1)

Country Link
CN (1) CN116843736A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392251A (en) * 2023-12-06 2024-01-12 海马云(天津)信息技术有限公司 Decoding performance optimization method for texture data in ASTC format in Mesa 3D graphics library
CN117392251B (en) * 2023-12-06 2024-02-09 海马云(天津)信息技术有限公司 Decoding performance optimization method for texture data in ASTC format in Mesa 3D graphics library

Similar Documents

Publication Publication Date Title
US11595653B2 (en) Processing of motion information in multidimensional signals through motion zones and auxiliary information through auxiliary zones
US10904564B2 (en) Method and apparatus for video coding
US10291926B2 (en) Method and apparatus for compressing and decompressing data
US9147264B2 (en) Method and system for quantizing and squeezing base values of associated tiles in an image
EP2204045A1 (en) Method and apparatus for compressing and decompressing data
WO2006048961A1 (en) Drawing device and drawing method
US20230108967A1 (en) Micro-meshes, a structured geometry for computer graphics
US7171051B1 (en) Method and apparatus for performing fixed blocksize compression for texture mapping
US20130033513A1 (en) Texture compression and decompression
CN116843736A (en) Scene rendering method and device, computing device, storage medium and program product
US8942474B2 (en) Method and system for interpolating index values of associated tiles in an image
US11263786B2 (en) Decoding data arrays
US7136077B2 (en) System, method, and article of manufacture for shading computer graphics
CN117280680A (en) Parallel mode of dynamic grid alignment
KR20230146629A (en) Predictive coding of boundary geometry information for mesh compression
CN110956670A (en) Multi-mode self-adaptive Z value compression algorithm based on depth migration
CN116137674B (en) Video playing method, device, computer equipment and storage medium
US12028539B2 (en) Generating multi-pass-compressed-texture images for fast delivery
US9064347B2 (en) Method, medium, and system rendering 3 dimensional graphics data considering fog effect
US20230291917A1 (en) Generating multi-pass-compressed-texture images for fast delivery
CN115170712A (en) Data processing method, data processing apparatus, storage medium, and electronic apparatus
KR20240093874A (en) Dynamic mesh compression using inter and intra prediction
WO2023275534A1 (en) Foveated imaging
CN117083636A (en) Predictive coding of boundary geometry information for grid compression
CN116848553A (en) Method for dynamic grid compression based on two-dimensional UV atlas sampling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination