WO2022120295A1 - Non-invasive graphics acceleration via directional pixel/screen decimation - Google Patents

Non-invasive graphics acceleration via directional pixel/screen decimation

Info

Publication number
WO2022120295A1
Authority
WO
WIPO (PCT)
Prior art keywords
detail value
mobile device
shading rate
value
geometry
Application number
PCT/US2021/064175
Other languages
French (fr)
Inventor
Hongyu Sun
Chen Li
Steven Jackson
Original Assignee
Innopeak Technology, Inc.
Application filed by Innopeak Technology, Inc. filed Critical Innopeak Technology, Inc.
Priority to PCT/US2021/064175 priority Critical patent/WO2022120295A1/en
Publication of WO2022120295A1 publication Critical patent/WO2022120295A1/en

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/363 Graphics controllers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/08 Bandwidth reduction
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/04 Changes in size, position or resolution of an image
    • G09G2340/0407 Resolution change, inclusive of the use of different resolutions for different screen areas
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/14 Solving problems related to the presentation of information to be displayed
    • G09G2340/145 Solving problems related to the presentation of information to be displayed related to small screens
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00 Aspects of the architecture of display systems
    • G09G2360/02 Graphics controller able to handle multiple formats, e.g. input or output formats
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00 Aspects of the architecture of display systems
    • G09G2360/08 Power processing, i.e. workload management for processors involved in display operations, such as CPUs or GPUs

Definitions

  • FIG. 1 provides illustrations of an advanced rendering system, in accordance with some embodiments of the application.
  • An illustrative mobile device 100 is provided.
  • Advanced rendering system 102 is installed on mobile device 100 among its computational layers, including application 110, graphics application 112, hardware user interface (HWUI) 114, driver(s) 116, and operating system 118.
  • application 110, graphics application 112, HWUI 114, driver(s) 116, and operating system 118 may be embedded with mobile device 100 to generate a mobile gaming environment, and can be implemented in different environments using features described throughout the disclosure.
  • Application 110 may comprise a software application executed using computer executable instructions on mobile device 100.
  • Application 110 may interact with the user interface of mobile device 100 to display received information to the user.
  • application 110 may include an electronic game or other software application that is operated by mobile device 100 to provide images via the display of mobile device 100.
  • Graphics application 112 may comprise a computer graphics rendering application programming interface (API) for rendering 2D and 3D computer graphics (e.g., OpenGL for Embedded Systems, OpenGL ES, GLES, etc.). Graphics application 112 may be designed for embedded systems like smartphones, video game consoles, mobile phones, or other user devices.
  • HWUI 114 may include a library that enables user interface (UI) components to be accelerated using the processor (e.g., GPU, CPU, etc.). HWUI 114 may correspond with an accelerated rendering pipeline for images and other data. In some mobile device 100 models (e.g., non-Android® models, etc.), HWUI 114 may be removed without diverting from the essence of the disclosure.
  • Driver(s) 116 may comprise a computer program that operates or controls the processor by providing a software interface to a hardware device. Driver(s) 116 may enable operating system 118 of mobile device 100 to access hardware functions without encoding precise details about the hardware being used.
  • the processor may include a specialized hardware engine to execute machine readable instructions and perform methods described throughout the disclosure.
  • the processor corresponds with a Graphics Processing Unit (GPU) to accelerate graphics operations and/or perform parallel, graphics operations.
  • Other processing units may be implemented without diverting from the scope of the application (e.g., central processing unit (CPU), etc.).
  • Advanced rendering system 102 can be provided as an interception layer between mobile game related libraries and drivers at graphics application 112. Advanced rendering system 102 may be invoked automatically and/or when an interface tool is activated (e.g., selecting "activate" or providing a predetermined visual effect from the UI in game assistant, etc.).
  • an application programming interface (API) call from graphics application 112 to the processor (or some other hardware in the system) may be intercepted and a customized graphics output may be provided to operating system 118 in its place.
  • advanced rendering system 102 may modify normal layer behavior of graphics application 112 (e.g., modify OpenGL by using Android® 10+ GLES Layers system, etc.).
  • advanced rendering system 102 may recompile it to use with application 110.
  • the image detail effect may reduce the resolution of an object to create an object that requires less data, which is transmitted through an OpenGL application programming interface (API) call to create a rendered image in application 110.
  • Advanced rendering system 102 may be installed on mobile device 100 as a transparent graphic framework.
  • the interception layer may correspond with a pre-processing mechanism that does not depend on the game engine. Advanced rendering system 102 may reduce an image quality of the images using a variety of methods discussed herein.
  • Various system properties may be altered as well.
  • the "debug. gles. layers" system property may be changed to reference a parameter associated with advanced rendering system 102. This parameter may redirect processing from the predefined application to advanced rendering system 102. This may effectively cause application 110 to call a specific OpenGL wrapper of advanced rendering system 102 instead of the default implementation. Once advanced rendering system 102 provides the parameter and redefined API calls, the application may forward the processing back to the default implementation of OpenGL.
  • Transparent graphic framework 104 may correspond with a software framework embedded at the operating system layer 118 in mobile device 100. Transparent graphic framework 104 may benevolently hijack the rendering pipeline used by application 110. Application 110 may operate normally and may have no knowledge of transparent graphic framework 104, nor would the logic or binary data of application 110 be modified.
  • Application 110 may generate and transmit API calls that comprise one or more texture parameters, matrices, and other object definitions that can be observed by transparent graphic framework 104.
  • Transparent graphic framework 104 may modify the parameters and/or generate additional API calls to be executed to add additional benefit to the overall system.
  • Transparent graphic framework 104 may include an effect detail engine 106, graphic API 107, and object adjustment engine 108.
  • Effect detail engine 106 may receive the payload values from the API call.
  • the payload values may define parameters of one or more objects stored with application 110.
  • the payload values may include intrinsic and/or extrinsic parameters, including lighting (e.g., shades, shadows, directional light, etc.), distance, image perspectives, materials, textures, or other image wrappers and libraries, as illustrated with FIG. 2.
  • FIG. 2 provides an illustrative payload of an API call with intrinsic and/or extrinsic parameters, in accordance with some embodiments of the disclosure.
  • API call 200 may be transmitted from application 110 to operating system 118 as part of pre-processing.
  • API call 200 may comprise intrinsic parameters 210 and/or extrinsic parameters 220.
  • Intrinsic parameters 210 may correspond with internal parameters related to an object or camera capturing an image of the object.
  • Examples of intrinsic parameters may include, for example, focal length, lens distortion, span and shape of geometry, dimension of geometry, triangle complexity of geometry, material texture frequency, material specularity and/or anisotropy, and the like.
  • Extrinsic parameters 220 may correspond with external parameters of the object or camera that are used to describe the transformation between the camera and its external world.
  • extrinsic parameters may include, for example, dimension of geometry in view space, distance of geometry, velocity or rotation as seen in view space, texture pattern density in screen space, triangle size and density in screen space, and the like.
  • intrinsic parameters 210 and/or extrinsic parameters 220 may be captured by transparent graphic framework 104, wrapped with adjusted information and provided to operating system 118 as output 230.
  • Output 230 may include updated parameters to change the images rendered by application 110.
  • Output 230 from transparent graphic framework 104 may be provided to the display screen of mobile device 100. The display screen may present the rendered images at mobile device 100.
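For illustration only, the payload shapes described above might be modeled as the following structs; the field names paraphrase the parameter lists in this disclosure and are not an actual data layout from the patent.

```cpp
#include <array>

// Illustrative shapes for API call 200's payload (not the patent's layout).
struct IntrinsicParams210 {
  float focalLength = 0;
  float lensDistortion = 0;
  float textureFrequency = 0;  // material texture frequency
  float specularity = 0;       // material specularity
  float anisotropy = 0;        // material anisotropy
  int triangleCount = 0;       // triangle complexity of geometry
};

struct ExtrinsicParams220 {
  float viewDistance = 0;               // distance of geometry in view space
  std::array<float, 3> velocity{};      // velocity/rotation as seen in view space
  float texelDensity = 0;               // texture pattern density in screen space
  float triangleDensity = 0;            // triangle size/density in screen space
};

struct ApiCall200 {
  IntrinsicParams210 intrinsic;
  ExtrinsicParams220 extrinsic;
};

// Output 230: the same payload after adjustment, forwarded to the OS.
struct Output230 {
  ApiCall200 adjusted;
};
```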
  • the goal of adjusting intrinsic parameters 210 and/or extrinsic parameters 220 is to reduce pixel computations without significantly reducing the quality of rendered graphics.
  • an albedo texture map from intrinsic parameters 210 may be a contributor to an image's appearance (e.g., selected as the rendering function to alter the image). Adjustment of the albedo texture map may adjust the corresponding geometry of the object. In another example, distance and orientation of the geometry as viewed by the camera from extrinsic parameters 220 may significantly affect the texture pattern of the object.
  • intrinsic parameters 210 and/or extrinsic parameters 220 may affect texture maps. Texture maps may include intrinsic parameters 210, for example, including normal, metallic, roughness, etc. textures of a particular object. These intrinsic parameters 210 may be affected by lighting directions as well as one or more view positions (extrinsic parameters 220). In some examples, a set of lighting directions can be an additional dimension in the permutation of input data.
  • rendering during the pre-processing phase may also consider the effect on other image attributes. For example, the final rendering may adjust distances, orientations, and the like.
  • each geometric object in a scene may be identified.
  • Transparent graphic framework 104 may render the identified objects, rotate the objects in thirty-degree increments for each dimension of the Euler angles, and store each of these pre-renderings in object data store 105.
  • the pre-renderings may be discrete samplings of possible orientations of each object in a virtual scene.
  • transparent graphic framework 104 may render the object, vary the position of the object relative to the camera in a virtual scene (e.g., in discrete samplings of the position of the object, etc.), and store each of these pre-renderings in object data store 105.
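As a rough sketch, this discrete sampling could be enumerated as follows (thirty-degree steps per Euler axis, with an assumed handful of discrete camera distances; the struct and the specific distance values are illustrative):

```cpp
#include <vector>

// One pre-rendering configuration: an object orientation plus a camera distance.
struct PreRenderSample {
  float yawDeg, pitchDeg, rollDeg;  // Euler angles, 30-degree steps
  float distance;                   // discrete camera distance (illustrative)
};

std::vector<PreRenderSample> EnumeratePreRenderSamples() {
  std::vector<PreRenderSample> samples;
  const float kStep = 30.0f;
  const float kDistances[] = {1.0f, 2.0f, 4.0f, 8.0f};  // assumed sampling
  for (float yaw = 0.0f; yaw < 360.0f; yaw += kStep)
    for (float pitch = 0.0f; pitch < 360.0f; pitch += kStep)
      for (float roll = 0.0f; roll < 360.0f; roll += kStep)
        for (float d : kDistances)
          samples.push_back({yaw, pitch, roll, d});
  return samples;  // 12 x 12 x 12 orientations x 4 distances = 6912 renders
}
```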
  • the default rendering of this object may fill the pixels contained by the silhouette of the object with one or more sampled texels (e.g., an array of data in texture space, etc.).
  • these texels may correspond with a particular mipmap level corresponding to the texture, according to one or more internal screen space derivatives.
  • each object orientation may be rendered twice, including a first time with standard shading computation and a second time with textures (or other intrinsic parameters 210).
  • the textures may correspond with a sampling forced to the first mipmap level (e.g., the most high frequency level).
  • Transparent graphic framework 104 may determine how the per-pixel differences between these two rendered images correspond to various quality changes. For example, if the detail values (e.g., pixel values) at a pixel location from these two images differ by less than a first threshold value, then a shading rate may match the higher mipmap level.
  • the first threshold value may be zero (e.g., the two images are exactly the same) or a predetermined value (e.g., less than 1%) or may have a statistical basis (e.g., a "normal" apple is about the size of a human fist, whereas a "large” or "very large” apple would need to be larger by a threshold value than the size of a fist).
  • when the detail values differ by more than a second threshold value, a shading rate may match the lower mipmap level.
  • the second threshold value may be a predetermined value (e.g., greater than 50%) or may have a statistical basis (e.g., a "normal" apple is about the size of a human fist, whereas a "large" or "very large" apple would need to be larger by a threshold value than the size of a fist).
  • Transparent graphic framework 104 may determine the shading rate for a particular pixel based on this comparison process (e.g., the higher mipmap level for matching renderings and a lower mipmap level for renderings that are very different, etc.).
  • the higher mipmap level of a pixel may correspond with a 4x4 rendering and the lower mipmap level of a pixel may correspond with a 1x1 rendering.
  • the higher mipmap level of a pixel may correspond with a lower shading rate. This may allow the shading rate to remain uniform.
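A minimal sketch of this per-pixel comparison, assuming the two pre-renderings (default mipmap selection versus forced most-detailed mipmap) are available as same-sized arrays of scalar detail values; the threshold constants are placeholders for the first and second thresholds described above:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Map the per-pixel difference between the default rendering (p) and the
// rendering forced to the most detailed mipmap level (q) to a uniform
// shading rate; N means one shading sample per N x N pixel block.
int UniformShadingRate(float p, float q) {
  const float d = std::fabs(p - q);
  if (d == 0.0f) return 4;  // renderings match exactly: coarsest rate (4x4)
  if (d < 0.01f) return 2;  // below the "first threshold" placeholder (< 1%)
  return 1;                 // renderings differ strongly: full-rate 1x1
}

std::vector<int> PerPixelShadingRates(const std::vector<float>& defaultImg,
                                      const std::vector<float>& forcedMip0Img) {
  std::vector<int> rates(defaultImg.size());
  for (std::size_t i = 0; i < defaultImg.size(); ++i)
    rates[i] = UniformShadingRate(defaultImg[i], forcedMip0Img[i]);
  return rates;
}
```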
  • additional information may be received and analyzed, like the screen space rate of change at each pixel.
  • the additional information may be included in the API call 200.
  • the analysis of the additional information may be accomplished via derivative calculations (e.g., ddx() and ddy() instructions, etc.) to determine a directionally dependent rate of change of the shading rate (e.g., detail value), which can further determine if 4x1 or 1x4 shading rates are more valid than 4x4.
  • the comparison may be implemented with all pixels of the image to determine sensible shading rates for all pixels.
  • the information may be stored in a data store associated with transparent graphic framework 104.
  • histograms of shading rates may be generated. The histogram may be used to select the shading rate with the most samples.
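That selection could look like the following sketch: tally per-pixel shading-rate votes into a histogram and keep the rate with the most samples (the pair encoding of rates is an assumption for illustration):

```cpp
#include <map>
#include <utility>
#include <vector>

// Pick the shading rate with the most per-pixel "votes". Rates are encoded
// as (x, y) pairs, e.g. {4, 4}, {2, 4}, {1, 1} (encoding is illustrative).
std::pair<int, int> ModalShadingRate(
    const std::vector<std::pair<int, int>>& perPixelRates) {
  std::map<std::pair<int, int>, int> histogram;
  for (const auto& r : perPixelRates) ++histogram[r];

  std::pair<int, int> best = {1, 1};  // default to full-rate shading
  int bestCount = -1;
  for (const auto& [rate, count] : histogram)
    if (count > bestCount) { bestCount = count; best = rate; }
  return best;
}
```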
  • Lighting adjustments of the object may also be determined and stored in object data store 105. For example, while lighting may drastically change the appearance of an object, there are many mobile graphics applications that do not include object lighting or shadowing by design in order to conserve computation. As such, lighting (as an intrinsic parameter 210) may be ignored.
  • transparent graphic framework 104 may either defer back to the default 1x1 rendering case or adopt lighting directions as an additional parameter when pre-processing.
  • the pre-calculated sets of quality loss percentages may be retrieved at runtime. For example, at runtime, it can be straightforward for the system to look up a shading rate given intrinsic parameters 210 and/or extrinsic parameters 220.
  • the pre-calculated set of object data may be simplified and the rendering can interpolate from the nearest data points around a given set of intrinsic parameters 210 and/or extrinsic parameters 220.
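A sketch of that runtime lookup, under the simplifying assumption of a one-dimensional key (e.g., camera distance) with linear interpolation between the two nearest pre-computed data points; a real table would be keyed on several intrinsic/extrinsic parameters:

```cpp
#include <algorithm>
#include <vector>

struct RateEntry {
  float param;  // e.g., camera distance used during pre-processing
  float rate;   // pre-computed shading rate stored for that parameter
};

// Entries must be sorted by `param`. Interpolates the shading rate from the
// nearest pre-computed data points around the runtime parameter value.
float LookupShadingRate(const std::vector<RateEntry>& table, float param) {
  if (table.empty()) return 1.0f;                 // fall back to full rate
  if (param <= table.front().param) return table.front().rate;
  if (param >= table.back().param) return table.back().rate;

  auto hi = std::lower_bound(
      table.begin(), table.end(), param,
      [](const RateEntry& e, float p) { return e.param < p; });
  auto lo = hi - 1;
  const float t = (param - lo->param) / (hi->param - lo->param);
  return lo->rate + t * (hi->rate - lo->rate);    // linear interpolation
}
```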
  • Quality degradation may be acceptable if computer processing requirements are reduced and the degradation of the image is further defined in both horizontal and vertical screen space directions.
  • the quantified difference may be input to the variable rate shading APIs to produce one or more directional varying rendered resolutions.
  • the quality degradation may be acceptable because of the limitation in sight of the perceived image by a typical human observer.
  • transparent graphic framework 104 also comprises graphic API 107.
  • Graphic API 107 may determine parameters of one or more objects stored with application 110. For example, when the parameter is not defined by application 110, graphic API 107 may determine the parameter value and correlate the new parameter value with the object. The parameter value may be stored with a unique identifier for the object in object data store 105.
  • Object adjustment engine 108 may update one or more parameters associated with the intercepted API call and provide the output to operating system 118 to provide to the display screen of mobile device 100.
  • FIG. 3 provides an illustrative image adjusted by the advanced rendering system, in accordance with embodiments of the application.
  • a monochromatic pipe object 300 (illustrated as object 300A and 300B) is rotated so one of its ends extends into the distance.
  • Illustration 310 shows the object 300A maintaining the shading and detail as the pipe moves visually farther from the user in the display screen.
  • Illustration 320 shows the object 300B losing some of the shading and detail as it moves farther away, in accordance with the image adjustments made by transparent graphic framework 104.
  • This is an example of the geometry of the object 300 (intrinsic parameters 210) viewed in a particular orientation (extrinsic parameters 220).
  • the derivative calculations may have increasingly large ddx() and ddy() values as the object 300B moves farther away. Since its texture is monochromatic, the shading rates would remain uniform and be as high as 2x2 or 4x4.
  • FIG. 4 provides an illustrative image adjusted by the advanced rendering system, in accordance with embodiments of the application.
  • a black and white soccer ball object 400 (illustrated as object 400A and 400B) is provided in initial illustration 410.
  • the derivative calculations may have increasingly large ddx() and ddy() values towards the contour.
  • the mipmap pixel value differences may be small, except where the black and white parts meet. Since the renderings essentially match, a 4x4 shading rate is reasonable in this example.
  • the main intrinsic parameter 210 may correspond with texture.
  • transparent graphic framework 104 may determine the shading rate for a particular pixel of the soccer ball object 400 based on the comparison process described herein (e.g., the higher mipmap level for matching renderings and a lower mipmap level for renderings that are very different, etc.).
  • the higher mipmap level of a pixel may correspond with a 4x4 rendering and the lower mipmap level of a pixel may correspond with a 1x1 rendering.
  • FIG. 5 provides an illustrative image adjusted by the advanced rendering system, in accordance with embodiments of the application.
  • a black and white zebra is provided in initial illustration 510.
  • a side view of the zebra may identify a texture pattern (e.g., intrinsic parameter 210). Due to the vertical striping pattern, the ddx() value likely is much larger than ddy() for most pixels.
  • transparent graphic framework 104 may determine a non- uniform shading rate (e.g., 1x2 or 1x4, etc.).
  • FIG. 6 illustrates one or more shading rate processes for an illustrative pixel, in accordance with embodiments of the application.
  • transparent graphic framework 104 may determine a first shading rate for pixel 600 (e.g., 4x4, 3x3, 2x2, or 1x1) and then determine if a second shading rate for pixel 600 is a better choice (e.g., 4x2, 2x4, 1x4, or 4x1).
  • thresholds in which a shading rate changes are not constant.
  • the shading rate changes in the X-direction from 1x1 to 2x1 or 2x1 to 4x1.
  • the shading rate changes in the Y-direction from 1x1 to 1x2 and 1x2 to 1x4.
  • the shading rates in X-direction and the Y-direction may be independent of each other allowing for combinations of 1, 2, and 4 (and others) for each direction.
  • a first shading rate may be determined for pixel 600 at location (x, y), which may have a pixel value determined as follows.
  • P = tex2D(texture, uv), where "P" is the pixel value with visual signals averaged from a surrounding texel neighborhood, "texture" is the incoming texture image to be sampled, and "uv" is the texture coordinate to sample into the texture image.
  • the pixel value may be determined after bilinear or trilinear filtering.
  • D = abs(P - Q), where "Q" may be the corresponding pixel value from the second rendering (e.g., sampled at the forced mipmap level). If "D" equals 0, a VRS shading rate of size N x N can be used. This can correspond with the maximum shading rate (e.g., 4x4). As "D" increases, the usable VRS shading rate can decrease (e.g., 3x3, 2x2, 1x1, etc.).
  • a second shading rate may be determined based on the first shading rate. For example, given shading rate 4x4 found previously, determine whether 4x2, 2x4, 1x4, or 4x1 would be better choices.
  • The larger the value of "S," the larger the change in the horizontal direction; transparent graphic framework 104 will use 2x4 or 1x4 for medium or large values of "S." The larger the value of "T," the larger the change in the vertical direction; transparent graphic framework 104 will use 4x2 or 4x1 for medium or large values of "T."
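The two-step selection can be sketched as follows, assuming "D" is the mipmap-difference value from above and "S"/"T" are screen-space rates of change in the horizontal and vertical directions (e.g., ddx()/ddy()-style finite differences); all threshold constants are placeholders, and the reading of "S" as the horizontal measure mirrors the stated role of "T":

```cpp
#include <algorithm>
#include <utility>

// Step 1: uniform candidate rate from the mipmap-difference value D
// (see D = abs(P - Q) above). N means an N x N coarse-shading block.
int UniformRate(float d) {
  if (d == 0.0f) return 4;  // identical renderings: 4x4
  if (d < 0.01f) return 2;  // small difference (placeholder threshold): 2x2
  return 1;                 // large difference: full-rate 1x1
}

// Step 2: refine directionally. s and t approximate the screen-space rates
// of change in x and y (ddx()/ddy()-style finite differences at this pixel).
std::pair<int, int> DirectionalRate(float d, float s, float t) {
  const int n = UniformRate(d);
  int rateX = n, rateY = n;     // rate pair is (width x height)
  const float kMedium = 0.05f;  // placeholder thresholds
  const float kLarge = 0.20f;
  // Large horizontal change: shade finer along x (e.g., 2x4 or 1x4).
  if (s >= kLarge) rateX = 1;
  else if (s >= kMedium) rateX = std::min(rateX, 2);
  // Large vertical change: shade finer along y (e.g., 4x2 or 4x1).
  if (t >= kLarge) rateY = 1;
  else if (t >= kMedium) rateY = std::min(rateY, 2);
  return {rateX, rateY};        // e.g., {1, 4} denotes a 1x4 rate
}
```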
  • FIG. 7 illustrates an example iterative process performed by a computing component 700 for providing advanced rendering.
  • Computing component 700 may be, for example, a server computer, a controller, or any other similar computing component capable of processing data.
  • the computing component 700 includes a hardware processor 702 and a machine-readable storage medium 704.
  • computing component 700 may be an embodiment of a system corresponding with advanced rendering system 102 of FIG. 1.
  • Hardware processor 702 may be one or more central processing units (CPUs), semiconductor-based microprocessors, and/or other hardware devices suitable for retrieval and execution of instructions stored in machine-readable storage medium 704. Hardware processor 702 may fetch, decode, and execute instructions, such as instructions 710-750, to control processes or operations for optimizing the system during run-time. As an alternative or in addition to retrieving and executing instructions, hardware processor 702 may include one or more electronic circuits that include electronic components for performing the functionality of one or more instructions, such as a field programmable gate array (FPGA), application specific integrated circuit (ASIC), or other electronic circuits.
  • a machine-readable storage medium such as machine-readable storage medium 704, may be any electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions.
  • machine-readable storage medium 704 may be, for example, Random Access Memory (RAM), non-volatile RAM (NVRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage device, an optical disc, and the like.
  • machine-readable storage medium 704 may be a non- transitory storage medium, where the term "non-transitory" does not encompass transitory propagating signals.
  • machine-readable storage medium 704 may be encoded with executable instructions, for example, instructions 710-750.
  • Hardware processor 702 may execute instruction 710 to intercept an API call from an application.
  • Hardware processor 702 may execute instruction 720 to parse the API call to determine intrinsic or extrinsic parameters of a digital image.
  • Hardware processor 702 may execute instruction 730 to determine a detail value of one or more intrinsic or extrinsic parameters of a digital image.
  • Hardware processor 702 may execute instruction 740 to adjust the detail value to a second detail value when the detail value exceeds a threshold value.
  • Hardware processor 702 may execute instruction 750 to generate an output message with the second detail value to alter the rendered digital image.
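Taken together, instructions 710-750 amount to the control flow sketched below; the payload types, the parsing stub, and the halving adjustment are placeholders, since the patent leaves the specific parameter and adjustment rule open:

```cpp
// Schematic flow for instructions 710-750; types and the adjustment rule are
// placeholders, not the patent's actual implementation.
struct ApiCall {
  // intercepted payload: texture parameters, matrices, object definitions...
};

struct Params {
  float detailValue = 0.0f;  // detail value of an intrinsic/extrinsic parameter
};

// 720: parse intrinsic/extrinsic parameters out of the intercepted call.
Params Parse(const ApiCall& call) {
  (void)call;
  return Params{0.8f};  // placeholder: derived from the payload in practice
}

// 730-740: keep the detail value unless it exceeds the threshold, in which
// case adjust it to a second detail value (halving is only illustrative).
float AdjustDetail(float detail, float threshold) {
  return (detail > threshold) ? detail * 0.5f : detail;
}

// 710 + 750: the intercepted call arrives here, and the (possibly adjusted)
// value is wrapped into the output message forwarded to the operating system.
struct Output {
  float secondDetailValue;
};

Output HandleInterceptedCall(const ApiCall& call, float threshold) {
  const Params p = Parse(call);
  return Output{AdjustDetail(p.detailValue, threshold)};
}
```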
  • FIG. 8 depicts a block diagram of an example computer system 800 in which various of the embodiments described herein may be implemented.
  • the computer system 800 includes a bus 802 or other communication mechanism for communicating information, and one or more hardware processors 804 coupled with bus 802 for processing information.
  • Hardware processor(s) 804 may be, for example, one or more general purpose microprocessors.
  • the computer system 800 also includes a main memory 806, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 802 for storing information and instructions to be executed by processor 804.
  • Main memory 806 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 804.
  • Such instructions, when stored in storage media accessible to processor 804, render computer system 800 into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • the computer system 800 further includes a read only memory (ROM) 808 or other static storage device coupled to bus 802 for storing static information and instructions for processor 804.
  • a storage device 810 such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 802 for storing information and instructions.
  • the computer system 800 may be coupled via bus 802 to a display 812, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user.
  • An input device 814 is coupled to bus 802 for communicating information and command selections to processor 804.
  • Another type of user input device is cursor control 816, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 804 and for controlling cursor movement on display 812.
  • the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.
  • the computing system 800 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s).
  • This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • the words "component," "engine," "system," "database," "data store," and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++.
  • a software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts.
  • Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution).
  • Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device.
  • Software instructions may be embedded in firmware, such as an EPROM.
  • hardware components may be comprised of connected logic units, such as gates and flipflops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
  • the computer system 800 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 800 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 800 in response to processor(s) 804 executing one or more sequences of one or more instructions contained in main memory 806. Such instructions may be read into main memory 806 from another storage medium, such as storage device 810. Execution of the sequences of instructions contained in main memory 806 causes processor(s) 804 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
  • non-transitory media refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media.
  • Non-volatile media includes, for example, optical or magnetic disks, such as storage device 810.
  • Volatile media includes dynamic memory, such as main memory 806.
  • non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.
  • Non-transitory media is distinct from but may be used in conjunction with transmission media.
  • Transmission media participates in transferring information between non-transitory media.
  • transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 802.
  • transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • the computer system 800 also includes an interface 818 (e.g., communication interface or network interface) coupled to bus 802.
  • Interface 818 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks.
  • interface 818 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
  • interface 818 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN).
  • Wireless links may also be implemented.
  • interface 818 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • a network link typically provides data communication through one or more networks to other data devices.
  • a network link may provide a connection through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP).
  • the ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet.”
  • Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on network link and through interface 818, which carry the digital data to and from computer system 800, are example forms of transmission media.
  • the computer system 800 can send messages and receive data, including program code, through the network(s), network link and interface 818.
  • a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the interface 818.
  • the received code may be executed by processor 804 as it is received, and/or stored in storage device 810, or other non-volatile storage for later execution.
  • Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware.
  • the one or more computer systems or computer processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service” (SaaS).
  • the processes and algorithms may be implemented partially or wholly in application-specific circuitry.
  • the various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations.
  • a circuit might be implemented utilizing any form of hardware, software, or a combination thereof.
  • processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit.
  • the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality.
  • a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 800.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Image Generation (AREA)

Abstract

Systems and methods are provided for an advanced rendering system to improve image rendering. For example, the disclosed technology can intercept an image rendering function call from a mobile gaming application at a mobile device to an application programming interface (API) internal to the mobile device, and compute image values (e.g., shading rate, etc.) that quantify a visually acceptable image quality loss in a displayed image at the mobile device.

Description

NON-INVASIVE GRAPHICS ACCELERATION VIA DIRECTIONAL PIXEL/SCREEN DECIMATION
Technical Field
[0001] The disclosed technology relates generally to improved image rendering. Particularly, the disclosed technology relates to intercepting image rendering function calls from a mobile gaming application at a mobile device to an application programming interface (API) internal to the mobile device, and computing image values (e.g., shading rate, etc.) that quantify a visually acceptable image quality loss.
Description of Related Art
[0002] Mobile gaming requires intensive computing resources at a mobile device to run the gaming application. The backend computing system that generates the gaming application also requires massive computational power to generate and produce the game, especially in comparison to the mobile device. Additionally, these mobile games are increasingly being developed with three-dimensional (3D) technology instead of two-dimensional (2D), which is typically much more resource intensive with additional complexities for capturing multiple points of view and data.
[0003] As an example, the objects in the mobile games often have vastly different texture patterns, ranging from the subtle gradient of a blue sky to the sharply chromatic patterns of a butterfly's wings. Yet each of these images requires the same number of pixels. As such, the backend computing system (e.g., GPU processor, etc.) can spend an equal amount of computational processing power to generate either object. Improvements to processing power and power consumption can be made without losing the detail that mobile gaming users look for in these gaming applications.
Brief Description of the Drawings
[0004] The present disclosure, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The figures are provided for purposes of illustration only and merely depict typical or example embodiments.
[0005] FIG. 1 provides illustrations of an advanced rendering system on a mobile device, in accordance with some embodiments of the application.
[0006] FIG. 2 provides an illustrative payload of an API call with intrinsic and/or extrinsic parameters, in accordance with some embodiments of the disclosure.
[0007] FIG. 3 provides an illustrative image adjusted by the advanced rendering system, in accordance with embodiments of the application.
[0008] FIG. 4 provides an illustrative image adjusted by the advanced rendering system, in accordance with embodiments of the application.
[0009] FIG. 5 provides an illustrative image adjusted by the advanced rendering system, in accordance with embodiments of the application.
[0010] FIG. 6 illustrates one or more shading rate processes for an illustrative pixel, in accordance with embodiments of the application.
[0011] FIG. 7 illustrates a computing component for providing advanced rendering, in accordance with embodiments of the application.
[0012] FIG. 8 is an example computing component that may be used to implement various features of embodiments described in the present disclosure.
[0013] These illustrative embodiments are mentioned not to limit or define the disclosure, but to provide examples to aid understanding thereof. Additional embodiments are discussed in the Detailed Description, and further description is provided there.
Detailed Description
[0014] In various mobile graphics applications, including mobile gaming applications, video, and other types of computer graphics applications, the number of polygons and vertices have an approximate upper bound of the physical size of the screen at the mobile device. As such, when the mobile games are created, the mobile device can approximate the amount of data that is received or processed to generate these polygons and vertices in order to display images and otherwise execute the application.
[0015] In addition to polygon data, the pixels that display the image data usually do not exceed a data value that provides the pixel definition and image quality. As an example, a point cloud may be generated of the same geometry (e.g., the point cloud corresponding with a 3D scanning process of a physical environment that creates digital data points of the 3D space that exist along the x, y, and z vertices).
[0016] Shading data for traditional image rendering may be extensive and detailed. For example, a vertex shader in the application may implement geometric transformations to a screen space, a tangent space, etc. to generate large amounts of shading data corresponding to each of the images. On the other hand, in the pixel shader stage, while there is a similar upper bound where the human eye can no longer discern a finer resolution, increasing the screen resolution to approach a theoretical limit of the resolution can equal massive (e.g., quadratic) growth of the number of pixels to be shaded. This corresponds with adding more data without having the benefit of an improved user experience or improved graphic display.
[0017] Moreover, shading can become increasingly complex and require large amounts of processing power to generate increasingly large amounts of data. There is no theoretical upper bound on pixel shader complexity. This may correlate with more detailed lighting from physical environments depicted in the mobile gaming application. In some examples, lighting data may be somewhat infinite and very resource intensive (e.g., based on lighting sources like direct light originating from a single source, like a lamp or the sun, light bouncing off metallic surfaces, etc.).
[0018] To improve the image rendering and presentation of the images provided by graphics applications, some traditional systems implement pixel reduction or removal that can have some improvement on the capability of the processor. For example, traditional systems may mitigate these problems with additional rendering passes that perform extra calculations. For instance, a "pre-z" or "depth only pass" may be employed to calculate front-most geometry that is not hidden. Multiple post-processing passes can be implemented to render geometry with low frequency textures into a lower resolution screen. After the post-processing passes, the rendered geometry may be up-scaled (e.g., by implementing a blur pass filter and providing the altered images to a higher resolution screen, etc.).
[0019] There is also a class of traditional techniques that reduce pixel shader computations. For example, shading processes can include lookup tables, transferring calculations from a pixel shader to a vertex shader, and the like. The shading processes may be designed for specific scenarios, and no one recipe can solve the problem in general.
[0020] In another example, traditional systems may implement Multi-Sampling Anti-Aliasing (MSAA) or Variable Rate Shading (VRS). MSAA can reduce jagged edges without having to resort to super sampling. The MSAA technique may be limited to components of the image (e.g., edges, etc.). The VRS technique can expand on MSAA and allow finer control over the surface of each geometry. However, VRS may not be enabled to adapt to the plethora of geometric conditions in a scene.
[0021] MSAA and VRS are techniques that can reduce the amount of processed pixels by coverage sampling (e.g., MSAA) or explicit overwrite of shading rate (e.g., VRS). For example, a graphics developer can create the computer-implemented instructions to render each geometry at lower resolution than a default value. In specific situations, such as for geometry with low frequency textures, the reduced resolution may not yield noticeable quality reduction, yet reduces the amount of processing required to implement the pixel shader steps.
[0022] However, MSAA and VRS have limitations. MSAA may solve the problem of sharp, stair-stepped pixel patterns due to low resolution by turning the sharp stair patterns into smooth edges. This process does not really increase the resolution of the image, but rather implements some visually acceptable shortcuts to image processing at a slightly higher memory cost. VRS may solve the problem of a full resolution image taking too long to draw at the user interface. VRS works by drawing an area on screen quickly at a half to a quarter of the full resolution (e.g., like 1080p in a television environment) instead of slowly at full resolution (e.g., like 2K in a television environment), and then upscaling that reduced-resolution area to the full display area while preserving its general look. This can be detrimental to the image, since it introduces some visual degradation, although the degradation can be controlled by implementing threshold values.
[0023] Aside from MSAA and VRS, there are also traditional practices of offloading computations from pixel shaders to vertex shaders (or other components), or to other network devices, which can shift the processing requirements so that they are handled by components of the system that are less busy. However, when the vertex shader processes pixel shading instructions, image artifacts may be introduced that lower the quality of the image. This movement of processing to another device can be especially noticeable in gaming applications, where image processing and action may be quick and intertwined. The artifacts may be created by interpolation methods or by other locally concentrated phenomena (e.g., lost specular highlights, etc.). This may be mitigated by increasing the vertex count via tessellation (e.g., geometry shaders, etc.), although increasing image quality without creating a significant amount of additional computational processing may not be easy to achieve on the mobile device running the gaming application.
[0024] In some examples, multiple rendering passes may be implemented. For example, the system can render to a half resolution screen and then blur up to a full resolution screen. This may cause a noticeable reduction in image quality and may not be effective if the scene contains high-frequency textures or pixels. It may also limit the multiple render targets (e.g., polygons or pixels to render for image processing) that are displayed. These render targets may be moved off screen with differing resolutions, which can be computationally inefficient and inconvenient. The multiple targets may also require extensive customization from a software developer to design or select a technique most suitable for the particular scene complexity and characteristics being worked on.

[0025] Better, more coherent methods of image rendering for mobile gaming applications are needed.
[0026] As described herein, embodiments of the application can intercept one or more application programming interface (API) calls to determine intrinsic or extrinsic parameters for rendering a digital image on a mobile device (e.g., user device, laptop, mobile phone, etc.). The intrinsic parameters may include, for example, span and shape of geometry, dimension of geometry, triangle complexity of geometry, material texture frequency, material specularity, material anisotropy, and the like. The extrinsic parameters may include, for example, dimension of geometry in view space, distance of geometry, velocity or rotation as seen in view space, texture pattern density in screen space, triangle size and density in screen space, and the like. The data of the image may be analyzed to determine a detail value of the intrinsic or extrinsic parameters for rendering a digital image. When the detail value exceeds a threshold value, the system may optimize rendering by adjusting the original intrinsic or extrinsic parameter. Otherwise, the intrinsic or extrinsic parameter may remain the same. This new intrinsic or extrinsic parameter can be determined in pre-processing to improve the user experience and add functionality to the mobile application that was not originally provided by the software developer or included in the original software application.
[0027] Technical improvements are realized throughout the application. For example, the image processing at the mobile device can be adjusted from the original resolution value to a lower resolution, without re-rendering the images originally provided by the user application. For example, images that are far away, very detailed, and/or not as noticeable may be blurred, while the image quality of objects that are closer to the point of view of the user may remain untouched. This may improve data processing for applications at the mobile device, improving rendering and processing for mobile gaming and other processes, without drastically affecting the noticeable image quality of the mobile gaming application.

[0028] Other technical improvements are realized as well. For example, the gaming programmer that creates the mobile gaming application does not need to incorporate these computational processing improvements with the gaming application, since they may be incorporated by benevolently hijacking an API call made by the complete gaming application to the operating system, as described herein.
[0029] FIG. 1 provides illustrations of an advanced rendering system, in accordance with some embodiments of the application. An illustrative mobile device 100 is provided. Advanced rendering system 102 is installed on mobile device 100 among its computational layers, including application 110, graphics application 112, hardware user interface (HWUI) 114, driver(s) 116, and operating system 118. For example, application 110, graphics application 112, HWUI 114, driver(s) 116, and operating system 118 may be embedded with mobile device 100 to generate a mobile gaming environment, and can be implemented in different environments using features described throughout the disclosure.
[0030] Application 110 may comprise a software application executed using computer executable instructions on mobile device 100. Application 110 may interact with the user interface of mobile device 100 to display received information to the user. In some examples, application 110 may include an electronic game or other software application that is operated by mobile device 100 to provide images via the display of mobile device 100.
[0031] Graphics application 112 may comprise a computer graphics rendering application programming interface (API) for rendering 2D and 3D computer graphics (e.g., OpenGL for Embedded Systems, OpenGL ES, GLES, etc.). Graphics application 112 may be designed for embedded systems like smartphones, video game consoles, mobile phones, or other user devices.
[0032] HWUI 114 may include a library that enables user interface (UI) components to be accelerated using the processor (e.g., GPU, CPU, etc.). HWUI 114 may correspond with an accelerated rendering pipeline for images and other data. In some mobile device 100 models (e.g., non-Android® models, etc.), HWUI 114 may be removed without diverting from the essence of the disclosure.

[0033] Driver(s) 116 may comprise a computer program that operates or controls the processor by providing a software interface to a hardware device. Driver(s) 116 may enable operating system 118 of mobile device 100 to access hardware functions without encoding precise details about the hardware being used.
[0034] The processor may include a specialized hardware engine to execute machine readable instructions and perform methods described throughout the disclosure. In some examples, the processor corresponds with a Graphics Processing Unit (GPU) to accelerate graphics operations and/or perform parallel graphics operations. Other processing units may be implemented without diverting from the scope of the application (e.g., central processing unit (CPU), etc.).
[0035] Advanced rendering system 102 can be provided as an interception layer between mobile game related libraries and drivers at graphics application 112. Advanced rendering system 102 may be invoked automatically and/or when an interface tool is activated (e.g., selecting "activate" or providing a predetermined visual effect from the UI in a game assistant, etc.).
[0036] In some examples, when advanced rendering system 102 is invoked or enabled, an application programming interface (API) call from graphics application 112 to the processor (or some other hardware in the system) may be intercepted and a customized graphics output may be provided to operating system 118 in its place. Through this interception, advanced rendering system 102 may modify normal layer behavior of graphics application 112 (e.g., modify OpenGL by using Android® 10+ GLES Layers system, etc.).
[0037] Once graphics application 112 is modified, advanced rendering system 102 may recompile it for use with application 110. For example, the image detail effect may reduce the resolution of an object to create an object that requires less data, which is transmitted via an OpenGL application programming interface (API) call to create a rendered image in application 110. Advanced rendering system 102 may be installed on mobile device 100 as a transparent graphic framework.

[0038] The interception layer may correspond with a pre-processing mechanism that does not depend on the game engine. Advanced rendering system 102 may reduce an image quality of the images using a variety of methods discussed herein.
[0039] Various system properties may be altered as well. For example, the "debug.gles.layers" system property may be changed to reference a parameter associated with advanced rendering system 102. This parameter may redirect processing from the predefined application to advanced rendering system 102. This may effectively cause application 110 to call a specific OpenGL wrapper of advanced rendering system 102 instead of the default implementation. Once advanced rendering system 102 provides the parameter and redefined API calls, the application may forward the processing back to the default implementation of OpenGL. A minimal sketch of how such a layer can be bootstrapped is shown below.
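By way of illustration only, the following C++ sketch shows how an interception layer of this kind can be bootstrapped on Android 10+ using the GLES layer entry points AndroidGLESLayer_Initialize and AndroidGLESLayer_GetProcAddress. The hooked_glDrawElements function and the pre-processing step inside it are hypothetical placeholders, not part of the disclosed framework.

```cpp
// Minimal sketch of a GLES interception layer, assuming the Android 10+
// layer entry points. hooked_glDrawElements is a hypothetical hook.
#include <GLES3/gl3.h>
#include <cstring>

using EGLFuncPointer = void (*)();
using GetNextLayerProcAddress = void* (*)(void* layer_id, const char* name);

static void* g_layer_id = nullptr;
static GetNextLayerProcAddress g_get_next = nullptr;

// Hypothetical hook: inspect or adjust draw parameters, then forward the
// call to the next layer (or the driver) so the application is unaffected.
static void hooked_glDrawElements(GLenum mode, GLsizei count, GLenum type,
                                  const void* indices) {
    // ... pre-processing: e.g., look up a pre-computed shading rate ...
    auto next = reinterpret_cast<decltype(&hooked_glDrawElements)>(
        g_get_next(g_layer_id, "glDrawElements"));
    if (next) next(mode, count, type, indices);
}

extern "C" void AndroidGLESLayer_Initialize(
    void* layer_id, GetNextLayerProcAddress get_next) {
    g_layer_id = layer_id;   // remember how to reach the next layer
    g_get_next = get_next;
}

extern "C" EGLFuncPointer AndroidGLESLayer_GetProcAddress(
    const char* name, EGLFuncPointer next) {
    if (std::strcmp(name, "glDrawElements") == 0)
        return reinterpret_cast<EGLFuncPointer>(&hooked_glDrawElements);
    return next;  // pass through calls this layer does not intercept
}
```

In this arrangement the application continues to issue ordinary OpenGL ES calls; only the loader's dispatch is redirected, which is consistent with the non-invasive, "benevolent hijack" approach described above.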
[0040] Transparent graphic framework 104 may correspond with a software framework embedded at the operating system layer 118 in mobile device 100. Transparent graphic framework 104 may benevolently hijack the rendering pipeline used by application 110. Application 110 may operate normally and may have no knowledge of transparent graphic framework 104, nor would the logic or binary data of application 110 be modified.
[0041] Application 110 may generate and transmit API calls that comprise one or more texture parameters, matrices, and other object definitions that can be observed by transparent graphic framework 104. Transparent graphic framework 104 may modify the parameters and/or generate additional API calls to be executed to add additional benefit to the overall system.
[0042] Transparent graphic framework 104 may include an effect detail engine 106, graphic API 107, and object adjustment engine 108.
[0043] Effect detail engine 106 may receive the payload values from the API call. The payload values may define parameters of one or more objects stored with application 110. The payload values may include intrinsic and/or extrinsic parameters, including lighting (e.g., shades, shadows, directional light, etc.), distance, image perspectives, materials, textures, or other image wrappers and libraries, as illustrated with FIG. 2.

[0044] FIG. 2 provides an illustrative payload of an API call with intrinsic and/or extrinsic parameters, in accordance with some embodiments of the disclosure. API call 200 may be transmitted from application 110 to operating system 118 as part of pre-processing. API call 200 may comprise intrinsic parameters 210 and/or extrinsic parameters 220.
[0045] Intrinsic parameters 210 may correspond with internal parameters related to an object or camera capturing an image of the object. Examples of intrinsic parameters may include, for example, focal length, lens distortion, span and shape of geometry, dimension of geometry, triangle complexity of geometry, material texture frequency, material specularity and/or anisotropy, and the like.
[0046] Extrinsic parameters 220 may correspond with external parameters of the object or camera that are used to describe the transformation between the camera and its external world. Examples of extrinsic parameters may include, for example, dimension of geometry in view space, distance of geometry, velocity or rotation as seen in view space, texture pattern density in screen space, triangle size and density in screen space, and the like.
[0047] In some examples, intrinsic parameters 210 and/or extrinsic parameters 220 may be captured by transparent graphic framework 104, wrapped with adjusted information and provided to operating system 118 as output 230. Output 230 may include updated parameters to change the images rendered by application 110. Output 230 from transparent graphic framework 104 may be provided to the display screen of mobile device 100. The display screen may present the rendered images at mobile device 100.
[0048] In some examples, the goal of adjusting intrinsic parameters 210 and/or extrinsic parameters 220 is to reduce pixel computations without significantly reducing the quality of rendered graphics. For example, an albedo texture map from intrinsic parameters 210 may be a contributor to an image's appearance (e.g., selected as the rendering function to alter the image). Adjustment of the albedo texture map may adjust the corresponding geometry of the object. In another example, distance and orientation of the geometry as viewed by the camera from extrinsic parameters 220 may significantly affect the texture pattern of the object.

[0049] In some examples, intrinsic parameters 210 and/or extrinsic parameters 220 may affect texture maps. Texture maps may include intrinsic parameters 210, for example, including normal, metallic, roughness, etc. textures of a particular object. These intrinsic parameters 210 may be affected by lighting directions as well as one or more view positions (extrinsic parameters 220). In some examples, a set of lighting directions can be an additional dimension in the permutation of input data.
[0050] With changes to the intrinsic parameters 210 and/or extrinsic parameters 220, rendering during the pre-processing phase may also consider the effect on other image attributes. For example, the final rendering may adjust distances, orientations, and the like. In some examples, each geometric object in a scene may be identified. Transparent graphic framework 104 may render the identified objects, rotate the objects every thirty degrees for each dimension of the Euler angles, and store each of these pre-renderings in object data store 105. The pre-renderings may be discrete samplings of possible orientations of each object in a virtual scene, as sketched below.
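A minimal sketch of this discrete orientation sampling follows; the prerender_and_store callback is a hypothetical stand-in for the framework's render-and-cache step.

```cpp
#include <functional>

// Discrete orientation sample: Euler angles in thirty-degree steps.
struct EulerAngles { int yaw_deg; int pitch_deg; int roll_deg; };

// Enumerate all 12 x 12 x 12 = 1,728 orientation samples per object and
// hand each one to a caller-supplied render-and-store step.
void sample_orientations(
    const std::function<void(const EulerAngles&)>& prerender_and_store) {
    for (int yaw = 0; yaw < 360; yaw += 30)
        for (int pitch = 0; pitch < 360; pitch += 30)
            for (int roll = 0; roll < 360; roll += 30)
                prerender_and_store({yaw, pitch, roll});
}
```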
[0051] Other pre-renderings may be completed as well. For example, transparent graphic framework 104 may render the object, vary the position of the object relative to the camera in a virtual scene (e.g., in discrete samplings of the position of the object, etc.), and store each of these pre-renderings in object data store 105. For any orientation, the default rendering of this object may fill the pixels contained by the silhouette of the object with one or more sampled texels (e.g., an array of data in texture space, etc.). In some examples, these texels may correspond with a particular mipmap level corresponding to the texture, according to one or more internal screen space derivatives.
[0052] As part of pre-calculation, each object orientation may be rendered twice, including a first time with standard shading computation and a second time with textures (or other intrinsic parameters 210). The textures may correspond with a sampling forced to the first mipmap level (e.g., level 0, the highest-frequency level).
[0053] Transparent graphic framework 104 may determine how the per-pixel differences between these two rendered images correspond to various quality changes. For example, if the detail values (e.g., pixel values) at a pixel location from these two images differ by less than a first threshold value, then the shading rate may match the higher mipmap level. The first threshold value may be zero (e.g., the two images are exactly the same), may be a predetermined value (e.g., less than 1%), or may have a statistical basis (e.g., a "normal" apple is about the size of a human fist, whereas a "large" or "very large" apple would need to be larger than the size of a fist by a threshold value). On the other hand, if the detail values from these two images differ by more than a second threshold value, then the shading rate may match the lower mipmap level. The second threshold value may be a predetermined value (e.g., greater than 50%) or may have a similar statistical basis.
[0054] Transparent graphic framework 104 may determine the shading rate for a particular pixel based on this comparison process (e.g., the higher mipmap level for matching renderings and a lower mipmap level for renderings that are very different, etc.). As an illustrative example, the higher mipmap level of a pixel may correspond with a 4x4 rendering and the lower mipmap level of a pixel may correspond with a 1x1 rendering. A simplified sketch of this per-pixel classification follows.
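By way of illustration, the sketch below classifies a single pixel from the per-pixel difference between the two renderings; the exact cut-off constants are assumptions that mirror the threshold discussion above rather than values mandated by the disclosure.

```cpp
#include <cstdint>
#include <cstdlib>

// A VRS shading rate: samples-per-shaded-pixel along X and Y.
struct ShadingRate { int x; int y; };

// Map the per-pixel difference between the standard rendering and the
// rendering forced to mipmap level 0 onto a uniform shading rate.
// Thresholds mirror the "< 1%" / "> 50%" discussion but are illustrative.
ShadingRate classify_pixel(uint8_t standard, uint8_t forced_mip0) {
    double diff = std::abs(static_cast<int>(standard) -
                           static_cast<int>(forced_mip0)) / 255.0;
    if (diff < 0.01) return {4, 4};  // renderings match: coarse shading
    if (diff < 0.50) return {2, 2};  // moderate difference: medium rate
    return {1, 1};                   // large difference: full-rate shading
}
```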
[0055] The higher mipmap level of a pixel (e.g., corresponding with a 4x4 rendering) may correspond with a lower shading rate. This may allow the shading rate to remain uniform. For non-uniform shading rates (e.g., 1x4 and 4x1, etc.), additional information may be received and analyzed, like the screen space rate of change at each pixel. The additional information may be included in the API call 200. The analysis of the additional information may be accomplished via derivative calculations (e.g., ddx()/ddy() instructions, etc.) to determine a directionally dependent rate of change of the shading rate (e.g., detail value), which can further determine if 4x1 or 1x4 shading rates are more valid than 4x4.
[0056] The comparison may be implemented with all pixels of the image to determine sensible shading rates for all pixels. The information may be stored in a data store associated with transparent graphic framework 104. In some examples, histograms of shading rates may be generated. The histogram may be used to select the shading rate with the most samples, as in the sketch below.

[0057] Lighting adjustments of the object may also be determined and stored in object data store 105. For example, while lighting may drastically change the appearance of an object, there are many mobile graphics applications that, by design, do not include object lighting or shadowing in order to conserve computation. As such, lighting (as an intrinsic parameter 210) may be ignored. In some examples, when lighting is used in application 110 and thus included as an intrinsic parameter 210, transparent graphic framework 104 may either defer back to the default 1x1 rendering case or adopt lighting directions as an additional parameter when pre-processing.
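As referenced in paragraph [0056], a minimal sketch of the histogram-based selection might look as follows; the region over which per-pixel rates are aggregated (whole image, tile, or draw call) is left to the caller.

```cpp
#include <map>
#include <utility>
#include <vector>

// ShadingRate {x, y} as in the earlier sketch.
struct ShadingRate { int x; int y; };

// Build a histogram over per-pixel shading rates and return the rate
// that received the most samples.
ShadingRate most_common_rate(const std::vector<ShadingRate>& rates) {
    std::map<std::pair<int, int>, int> histogram;
    for (const ShadingRate& r : rates) ++histogram[{r.x, r.y}];
    std::pair<int, int> best{1, 1};  // conservative default: full rate
    int best_count = -1;
    for (const auto& [rate, count] : histogram)
        if (count > best_count) { best = rate; best_count = count; }
    return {best.first, best.second};
}
```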
[0058] The pre-calculated sets of quality loss percentages may be retrieved at runtime. For example, at runtime, it can be straightforward for the system to look up a shading rate given intrinsic parameters 210 and/or extrinsic parameters 220. In some examples, the pre-calculated set of object data may be simplified, and the rendering can interpolate from the nearest data points around a given set of intrinsic parameters 210 and/or extrinsic parameters 220.
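A hedged sketch of such a runtime lookup is shown below. The key layout (distance and yaw buckets) and the nearest-sample policy are illustrative assumptions; as noted above, the disclosure also contemplates interpolating between neighboring samples.

```cpp
#include <cmath>
#include <map>
#include <utility>

struct ShadingRate { int x; int y; };

// Hypothetical lookup table keyed by discretized extrinsic parameters
// (here: a distance bucket and a thirty-degree yaw bucket).
static std::map<std::pair<int, int>, ShadingRate> g_precomputed_rates;

ShadingRate lookup_rate(float distance, float yaw_deg) {
    // Snap the query to the nearest pre-computed sample, matching the
    // discrete sampling performed during pre-processing.
    std::pair<int, int> key{
        static_cast<int>(std::lround(distance)),
        static_cast<int>(std::lround(yaw_deg / 30.0f))};
    auto it = g_precomputed_rates.find(key);
    return it != g_precomputed_rates.end() ? it->second
                                           : ShadingRate{1, 1};  // default
}
```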
[0059] Quality degradation may be acceptable if computer processing requirements are reduced and the degradation of the image is controlled separately in the horizontal and vertical screen space directions. The quantified difference may be input to the variable rate shading APIs to produce one or more directionally varying rendered resolutions. The quality degradation may be acceptable because of the limits of what a typical human observer can perceive in the image.
[0060] Returning to FIG. 1, transparent graphic framework 104 also comprises graphic API 107. Graphic API 107 may determine parameters of one or more objects stored with application 110. For example, when the parameter is not defined by application 110, graphic API 107 may determine the parameter value and correlate the new parameter value with the object. The parameter value may be stored with a unique identifier for the object in object data store 105.

[0061] Object adjustment engine 108 may update one or more parameters associated with the intercepted API call and provide the output to operating system 118 to provide to the display screen of mobile device 100.
[0062] FIG. 3 provides an illustrative image adjusted by the advanced rendering system, in accordance with embodiments of the application. For example, in a mobile gaming application, a monochromatic pipe object 300 (illustrated as object 300A and 300B) is rotated so one of its ends extends into the distance. Illustration 310 shows the object 300A maintaining the shading and detail as the pipe moves visually farther from the user in the display screen. Illustration 320 shows the object 300B losing some of the shading and detail as it moves farther away, in accordance with the image adjustments made by transparent graphic framework 104. This is an example of the geometry of the object 300 (intrinsic parameters 210) viewed in a particular orientation (extrinsic parameters 220). In illustration 320, the derivative calculations may have increasingly large ddx() and ddy() values as the object 300B moves farther away. Since its texture is monochromatic, the shading rates would remain uniform and be as high as 2x2 or 4x4.
[0063] FIG. 4 provides an illustrative image adjusted by the advanced rendering system, in accordance with embodiments of the application. For example, in the mobile gaming application, a black and white soccer ball object 400 (illustrated as object 400A and 400B) is provided in initial illustration 410. In illustration 420, the derivative calculations may have increasingly large ddx() and ddy() values towards the contour. The mipmap pixel value differences may be small, except at where the black and white parts meet. Since the renderings essentially match, a 4x4 shading rate is reasonable in this example.
[0064] In this example, the main intrinsic parameter 210 may correspond with texture. For example, transparent graphic framework 104 may determine the shading rate for a particular pixel of the soccer ball object 400 based on the comparison process described herein (e.g., the higher mipmap level for matching renderings and a lower mipmap level for renderings that are very different, etc.). The higher mipmap level of a pixel may correspond with a 4x4 rendering and the lower mipmap level of a pixel may correspond with a 1x1 rendering.
[0065] FIG. 5 provides an illustrative image adjusted by the advanced rendering system, in accordance with embodiments of the application. For example, in the mobile gaming application, a black and white zebra is provided in initial illustration 510. A side view of the zebra may identify a texture pattern (e.g., intrinsic parameter 210). Due to the vertical striping pattern, the ddx() value is likely much larger than ddy() for most pixels. Combined with the mipmap comparison, transparent graphic framework 104 may determine a non-uniform shading rate (e.g., 1x2 or 1x4, etc.).
[0066] FIG. 6 illustrates one or more shading rate processes for an illustrative pixel, in accordance with embodiments of the application. In this illustration, transparent graphic framework 104 may determine a first shading rate for pixel 600 (e.g., 4x4, 3x3, 2x2, or 1x1) and then determine whether a second shading rate for pixel 600 is a better choice (e.g., 4x2, 2x4, 1x4, or 4x1).
[0067] In some examples, the thresholds at which a shading rate changes are not constant. For example, the shading rate may change in the X-direction from 1x1 to 2x1 or from 2x1 to 4x1. In another example, the shading rate may change in the Y-direction from 1x1 to 1x2 or from 1x2 to 1x4. The shading rates in the X-direction and the Y-direction may be independent of each other, allowing for combinations of 1, 2, and 4 (and others) for each direction.
[0068] For example, a first shading rate may be determined for pixel 600 at location (x, y), which may have a pixel value P = tex2D( texture, uv ), where "P" is the pixel value with visual signals averaged from a surrounding texel neighborhood, "texture" is the incoming texture image to be sampled, and "uv" is the texture coordinate to sample into the texture image. In some examples, the pixel value may be determined after bilinear or trilinear filtering.
[0069] An alternative process may be used to determine the first shading rate. For example, the first shading rate may be determined for pixel 600 at location (x, y) using int mipmap_level = 0; Q = tex2DLod( texture, float4( uv, 0, mipmap_level ) ). "Q" may correspond with the highest-frequency signal, "P" may correspond with the visual signals averaged from a surrounding texel neighborhood of size "N," and "N" may correspond with a lower frequency. Let D = abs(P - Q). If "D" equals 0, a VRS shading rate of size N x N can be used. This can correspond with the maximum shading rate (e.g., 4x4). As "D" increases, the usable VRS shading rate can decrease (e.g., 3x3, 2x2, 1x1, etc.).
[0070] A second shading rate may be determined based on the first shading rate. For example, given the shading rate 4x4 found previously, the framework may determine whether 4x2, 2x4, 1x4, or 4x1 would be a better choice. Using "S" and "T" to represent how the object changes in the horizontal and vertical directions, S = ddx(P) and T = ddy(P). The larger the value of "S," the larger the change in the horizontal direction; transparent graphic framework 104 may use 2x4 or 1x4 for medium or large values of "S." The larger the value of "T," the larger the change in the vertical direction; transparent graphic framework 104 may use 4x2 or 4x1 for medium or large values of "T."
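The following CPU-side sketch mirrors this S/T logic for pre-processing. Finite differences between neighboring pixel values stand in for the ddx()/ddy() shader instructions, and the "medium change" threshold is an assumed constant rather than a value taken from the disclosure.

```cpp
#include <cmath>

struct ShadingRate { int x; int y; };

// Refine a uniform 4x4 rate using directional rates of change.
// p_right and p_below are the pixel values one step to the right and
// one step down, so the differences approximate ddx(P) and ddy(P).
ShadingRate refine_rate(float p, float p_right, float p_below) {
    const float kMediumChange = 0.1f;  // assumed threshold
    float s = std::fabs(p_right - p);  // horizontal change, like ddx(P)
    float t = std::fabs(p_below - p);  // vertical change, like ddy(P)
    if (s > kMediumChange && t > kMediumChange) return {1, 1};
    if (s > kMediumChange) return {1, 4};  // shade finer along X
    if (t > kMediumChange) return {4, 1};  // shade finer along Y
    return {4, 4};                         // smooth in both directions
}
```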
[0071] FIG. 7 illustrates an example iterative process performed by a computing component 700 for providing the advanced rendering described herein. Computing component 700 may be, for example, a server computer, a controller, or any other similar computing component capable of processing data. In the example implementation of FIG. 7, the computing component 700 includes a hardware processor 702 and machine-readable storage medium 704. In some embodiments, computing component 700 may be an embodiment of a system corresponding with advanced rendering system 102 of FIG. 1.
[0072] Hardware processor 702 may be one or more central processing units (CPUs), semiconductor-based microprocessors, and/or other hardware devices suitable for retrieval and execution of instructions stored in machine-readable storage medium 704. Hardware processor 702 may fetch, decode, and execute instructions, such as instructions 710-750, to control processes or operations for optimizing the system during run-time. As an alternative or in addition to retrieving and executing instructions, hardware processor 702 may include one or more electronic circuits that include electronic components for performing the functionality of one or more instructions, such as a field programmable gate array (FPGA), application specific integrated circuit (ASIC), or other electronic circuits.
[0073] A machine-readable storage medium, such as machine-readable storage medium 704, may be any electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. Thus, machine-readable storage medium 704 may be, for example, Random Access Memory (RAM), non-volatile RAM (NVRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage device, an optical disc, and the like. In some embodiments, machine-readable storage medium 704 may be a non-transitory storage medium, where the term "non-transitory" does not encompass transitory propagating signals. As described in detail below, machine-readable storage medium 704 may be encoded with executable instructions, for example, instructions 710-750.
[0074] Hardware processor 702 may execute instruction 710 to intercept an API call from an application.
[0075] Hardware processor 702 may execute instruction 720 to parse the API call to determine intrinsic or extrinsic parameters of a digital image.
[0076] Hardware processor 702 may execute instruction 730 to determine a detail value of one or more intrinsic or extrinsic parameters of a digital image.
[0077] Hardware processor 702 may execute instruction 740 to adjust the detail value to a second detail value when the detail value exceeds a threshold value.
[0078] Hardware processor 702 may execute instruction 750 to generate an output message with the second detail value to alter the rendered digital image.
[0079] FIG. 8 depicts a block diagram of an example computer system 800 in which various of the embodiments described herein may be implemented. The computer system 800 includes a bus 802 or other communication mechanism for communicating information, and one or more hardware processors 804 coupled with bus 802 for processing information. Hardware processor(s) 804 may be, for example, one or more general purpose microprocessors.

[0080] The computer system 800 also includes a main memory 806, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 802 for storing information and instructions to be executed by processor 804. Main memory 806 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 804. Such instructions, when stored in storage media accessible to processor 804, render computer system 800 into a special-purpose machine that is customized to perform the operations specified in the instructions.
[0081] The computer system 800 further includes a read only memory (ROM) 808 or other static storage device coupled to bus 802 for storing static information and instructions for processor 804. A storage device 810, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 802 for storing information and instructions.
[0082] The computer system 800 may be coupled via bus 802 to a display 812, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user. An input device 814, including alphanumeric and other keys, is coupled to bus 802 for communicating information and command selections to processor 804. Another type of user input device is cursor control 816, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 804 and for controlling cursor movement on display 812. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.
[0083] The computing system 800 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.

[0084] In general, the words "component," "engine," "system," "database," "data store," and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
[0085] The computer system 800 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 800 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 800 in response to processor(s) 804 executing one or more sequences of one or more instructions contained in main memory 806. Such instructions may be read into main memory 806 from another storage medium, such as storage device 810. Execution of the sequences of instructions contained in main memory 806 causes processor(s) 804 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
[0086] The term "non-transitory media," and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 810. Volatile media includes dynamic memory, such as main memory 806. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.
[0087] Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 802. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
[0088] The computer system 800 also includes an interface 818 (e.g., communication interface or network interface) coupled to bus 802. Interface 818 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, interface 818 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, interface 818 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, interface 818 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0089] A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet." Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through interface 818, which carry the digital data to and from computer system 800, are example forms of transmission media.
[0090] The computer system 800 can send messages and receive data, including program code, through the network(s), network link and interface 818. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the interface 818.
[0091] The received code may be executed by processor 804 as it is received, and/or stored in storage device 810, or other non-volatile storage for later execution.
[0092] Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service" (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines.
[0093] As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 800.
[0094] As used herein, the term "or" may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, "can," "could," "might," or "may," unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps.
[0095] Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as "conventional," "traditional," "normal," "standard," "known," and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as "one or more," "at least," "but not limited to" or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.

Claims

What is claimed is:
1. A computer-implemented method comprising: intercepting, by a modified OpenGL pipeline of a transparent graphic framework embedded with a mobile device, an application programming interface (API) call from a software application installed on the mobile device; parsing the API call to determine one or more intrinsic or extrinsic parameters for rendering a digital image on the mobile device; determining a detail value of the one or more intrinsic or extrinsic parameters; when the detail value exceeds a threshold value, adjusting the detail value to a second detail value; and generating an output message that includes the second detail value, wherein the second detail value alters the rendered digital image presented on a display of the mobile device.
2. The computer-implemented method of claim 1, wherein the intrinsic parameters include at least one of span and shape of geometry, dimension of geometry, triangle complexity of geometry, material texture frequency, material specularity and anisotropy.
3. The computer-implemented method of claim 1, wherein the extrinsic parameters include dimension of geometry in view space, distance of geometry, velocity or rotation as seen in view space, texture pattern density in screen space, triangle size, and density in screen space.
4. The computer-implemented method of claim 1, wherein the software application is a mobile gaming application.
5. The computer-implemented method of claim 1, wherein the threshold value is zero, the detail value is a uniform shading rate, and adjusting the detail value comprises adjusting the detail value to a higher mipmap level of a pixel in the digital image on the mobile device.
6. The computer-implemented method of claim 1, wherein the threshold value is greater than 50%, the detail value is a uniform shading rate, and adjusting the detail value comprises adjusting the detail value to a lower mipmap level of a pixel in the digital image on the mobile device.
7. The computer-implemented method of claim 1, wherein the detail value is a non-uniform shading rate and the method further comprises: initiating a derivative calculation to determine a directionally dependent rate of change of the detail value.
8. The computer-implemented method of claim 1, wherein the detail value is a first shading rate and the method further comprises: determining a second shading rate based on the first shading rate, wherein the second shading rate corresponds with an object in the image changing at a horizontal or vertical direction.
9. A computer system for generating a three-dimensional (3D) rendered image comprising: a memory; and one or more processors that are configured to execute machine readable instructions stored in the memory for performing the method comprising: intercepting, by a modified OpenGL pipeline of a transparent graphic framework embedded with a mobile device, an application programming interface (API) call from a software application installed on the mobile device; parsing the API call to determine one or more intrinsic or extrinsic parameters for rendering a digital image on the mobile device; determining a detail value of the one or more intrinsic or extrinsic parameters; when the detail value exceeds a threshold value, adjusting the detail value to a second detail value; and generating an output message that includes the second detail value, wherein the second detail value alters the rendered digital image presented on a display of the mobile device.
10. The computer system of claim 9, wherein the intrinsic parameters include at least one of span and shape of geometry, dimension of geometry, triangle complexity of geometry, material texture frequency, material specularity and anisotropy.
11. The computer system of claim 9, wherein the extrinsic parameters include dimension of geometry in view space, distance of geometry, velocity or rotation as seen in view space, texture pattern density in screen space, triangle size, and density in screen space.
12. The computer system of claim 9, wherein the software application is a mobile gaming application.
13. The computer system of claim 9, wherein the threshold value is zero, the detail value is a uniform shading rate, and adjusting the detail value comprises adjusting the detail value to a higher mipmap level of a pixel in the digital image on the mobile device.
14. The computer system of claim 9, wherein the threshold value is greater than 50%, the detail value is a uniform shading rate, and adjusting the detail value comprises adjusting the detail value to a lower mipmap level of a pixel in the digital image on the mobile device.
15. The computer system of claim 9, wherein the detail value is a non-uniform shading rate and the method further comprises: initiating a derivative calculation to determine a directionally dependent rate of change of the detail value.
16. The computer system of claim 9, wherein the detail value is a first shading rate and the method further comprises: determining a second shading rate based on the first shading rate, wherein the second shading rate corresponds with an object in the image changing at a horizontal or vertical direction.
17. A non-transitory computer-readable storage medium storing a plurality of instructions executable by one or more processors, the plurality of instructions when executed by the one or more processors cause the one or more processors to: intercept, by a modified OpenGL pipeline of a transparent graphic framework embedded with a mobile device, an application programming interface (API) call from a software application installed on the mobile device; parse the API call to determine one or more intrinsic or extrinsic parameters for rendering a digital image on the mobile device; determine a detail value of the one or more intrinsic or extrinsic parameters; when the detail value exceeds a threshold value, adjust the detail value to a second detail value; and generate an output message that includes the second detail value, wherein the second detail value alters the rendered digital image presented on a display of the mobile device.
18. The non-transitory computer-readable storage medium of claim 17, wherein the intrinsic parameters include at least one of span and shape of geometry, dimension of geometry, triangle complexity of geometry, material texture frequency, material specularity and anisotropy.
19. The non-transitory computer-readable storage medium of claim 17, wherein the extrinsic parameters include dimension of geometry in view space, distance of geometry, velocity or rotation as seen in view space, texture pattern density in screen space, triangle size, and density in screen space.
20. The non-transitory computer-readable storage medium of claim 17, wherein the software application is a mobile gaming application.
21. The non-transitory computer-readable storage medium of claim 17, wherein the threshold value is zero, the detail value is a uniform shading rate, and adjusting the detail value comprises adjusting the detail value to a higher mipmap level of a pixel in the digital image on the mobile device.
22. The non-transitory computer-readable storage medium of claim 17, wherein the threshold value is greater than 50%, the detail value is a uniform shading rate, and adjusting the detail value comprises adjusting the detail value to a lower mipmap level of a pixel in the digital image on the mobile device.
23. The non-transitory computer-readable storage medium of claim 17, wherein the detail value is a non-uniform shading rate and the instructions further cause the one or more processors to: initiate a derivative calculation to determine a directionally dependent rate of change of the detail value.
24. The non-transitory computer-readable storage medium of claim 17, wherein the detail value is a first shading rate and the instructions further cause the one or more processors to: determine a second shading rate based on the first shading rate, wherein the second shading rate corresponds with an object in the image changing at a horizontal or vertical direction.
PCT/US2021/064175 2021-12-17 2021-12-17 Non-invasive graphics acceleration via directional pixel/screen decimation WO2022120295A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2021/064175 WO2022120295A1 (en) 2021-12-17 2021-12-17 Non-invasive graphics acceleration via directional pixel/screen decimation

Publications (1)

Publication Number Publication Date
WO2022120295A1 (en)

Family

ID=81854545

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/064175 WO2022120295A1 (en) 2021-12-17 2021-12-17 Non-invasive graphics acceleration via directional pixel/screen decimation

Country Status (1)

Country Link
WO (1) WO2022120295A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8933948B2 (en) * 2010-10-01 2015-01-13 Apple Inc. Graphics system which utilizes fine grained analysis to determine performance issues
US20210343071A1 (en) * 2016-08-30 2021-11-04 Intel Corporation Multi-resolution deferred shading using texel shaders in computing environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21901623

Country of ref document: EP

Kind code of ref document: A1