CN118135076A - Rendering optimization method based on rasterization difference measurement - Google Patents


Info

Publication number
CN118135076A
Authority
CN
China
Prior art keywords
target, vertex, coordinate, frame
Prior art date
Legal status
Pending
Application number
CN202410313698.6A
Other languages
Chinese (zh)
Inventor
温研
Current Assignee
Beijing Linzhuo Information Technology Co Ltd
Original Assignee
Beijing Linzhuo Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Linzhuo Information Technology Co Ltd filed Critical Beijing Linzhuo Information Technology Co Ltd
Priority to CN202410313698.6A
Publication of CN118135076A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Generation (AREA)

Abstract

The invention discloses a rendering optimization method based on a rasterization difference metric. Vertex coordinates and primitive information are acquired by monitoring the API calls involved in graphics rendering and on-screen operations. When a graphics application completes rendering of a data frame, depending on the system implementation, either a coordinate transformation is performed or the output of the vertex shader is monitored to obtain rasterized screen-space coordinates, from which a target bounding box for the application's current data frame is determined. The data frame to be retained is then selected according to the difference between the area of the current target bounding box and that of the previous one, and the on-screen operation is performed with the retained frame.

Description

Rendering optimization method based on rasterization difference measurement
Technical Field
The invention belongs to the technical field of computer software development, and in particular relates to a rendering optimization method based on a rasterization difference metric.
Background
The basic process by which an existing graphics application, in particular a 3D graphics application, renders graphical objects and displays them on a screen is as follows: the application calls a graphics API to render the objects into display memory and, upon receiving the display's refresh signal, performs an on-screen operation that presents the contents of display memory on the screen. A 3D graphics application implements this process through a 3D graphics API, which additionally performs rasterization, i.e., converts 3D graphical objects into two-dimensional pixel regions. The interval between refresh signals depends on the display's refresh rate.
Ideally, the graphics application would finish rendering each frame exactly when the display's refresh signal arrives. In practice, however, mainstream displays refresh at 60 frames per second, while most 3D graphics applications, accelerated by mid-to-high-end GPUs, render several hundred frames per second. Because of this mismatch in processing speed, a large amount of rendered data never actually reaches the screen, and the frame that does reach the screen is not necessarily the best one. This wastes rendering resources, makes the displayed graphics less coherent, and degrades the user experience.
Disclosure of Invention
In view of the above, the present invention provides a rendering optimization method based on a rasterization difference metric that achieves smoother graphics rendering without modifying the graphics application.
The rendering optimization method based on a rasterization difference metric provided by the invention specifically comprises the following steps:
Step 1: obtain the vertex coordinates, primitive information, model-view matrices, and screen-space coordinates involved in graphics rendering; store the mapping among graphics application ID, vertex coordinates, and primitive information in a vertex-primitive mapping table; store the mapping among graphics application ID, vertex coordinates, and model-view matrix in a vertex-view-matrix mapping table; and store the mapping among graphics application ID, vertex coordinates, and screen-space coordinates in a vertex-space-coordinate mapping table;
Step 2: when the rendering of a data frame is observed to complete, take the current data frame as the target frame, take its graphics application ID as the target ID, and fetch from the vertex-primitive mapping table the most recent group of vertex coordinates associated with the target ID; mark each vertex as a target vertex coordinate and the group as the target vertex coordinate group, which corresponds to the current data frame. Obtain the primitive information corresponding to each vertex coordinate in the target vertex coordinate group as a target primitive, forming the target primitive group. Search the vertex-view-matrix mapping table and the vertex-space-coordinate mapping table for records matching both the target ID and the target vertex coordinate group; if matching records exist in the vertex-view-matrix mapping table, execute step 3; if matching records exist in the vertex-space-coordinate mapping table, execute step 4;
Step 3: traverse the target vertex coordinate group; for each target vertex coordinate, fetch all records in the vertex-view-matrix mapping table matching the target ID and that vertex coordinate, take the most recent model-view matrix as the target model-view matrix, and use it to rasterize the target vertex coordinate into a target screen-space coordinate. The target screen-space coordinates of all target vertex coordinates form the target screen-space coordinate group; then execute step 5;
Step 4: traverse the target vertex coordinate group; for each target vertex coordinate, fetch all records in the vertex-space-coordinate mapping table matching the target ID and that vertex coordinate, and take the most recent screen-space coordinate as the target screen-space coordinate. The target screen-space coordinates of all target vertex coordinates form the target screen-space coordinate group; then execute step 5;
Step 5: traverse the target primitive group and the target screen-space coordinate group, and determine the bounding box of each pair of target primitive and target screen-space coordinates as a target bounding box. All target bounding boxes corresponding to the target primitive group and the target screen-space coordinate group form the target bounding box group; merge all target bounding boxes in the group to obtain a first bounding box, and record its area as the target frame area;
if the current target frame area is smaller than a set threshold relative to the target frame area already stored for the target ID in the to-be-displayed data table, leave the table unchanged; otherwise, delete the existing data in the table and store the target ID, the target frame, and the current target frame area;
Step 6: upon receiving the on-screen signal, perform the on-screen operation with the target frame stored in the to-be-displayed data table.
Further, the vertex coordinates and primitive information in step 1 are obtained by monitoring the API calls related to graphics drawing.
Further, the model-view matrix in step 1 is obtained by monitoring the vertex coordinates involved in coordinate-transformation API calls and the model-view matrices used by those transformations.
Further, the screen-space coordinates in step 1 are obtained by monitoring the vertex coordinates and screen-space coordinates involved in vertex-shader-related API calls.
Further, the call time is recorded whenever an API call is monitored.
Further, the merging of all target bounding boxes in the target bounding box group in step 5 to obtain the first bounding box is implemented with the Vatti polygon clipping algorithm.
Further, the target frame area in step 5 is computed by triangle splitting or by the cross-section method.
Advantageous effects
With this method, vertex coordinates and primitive information are acquired by monitoring the interfaces involved in graphics rendering and on-screen operations. When the graphics application completes rendering of a data frame, depending on the system implementation, either a coordinate transformation is performed or the vertex shader output is monitored to obtain rasterized screen-space coordinates, from which the target bounding box of the current data frame is determined. The frame to be retained is then selected according to the difference between the areas of the current and previous target bounding boxes, and the on-screen operation is performed with it.
Detailed Description
The present invention is described in detail below with reference to examples.
The core idea of the rendering optimization method based on a rasterization difference metric provided by the invention is as follows: acquire vertex coordinates and primitive information by monitoring the API calls involved in graphics rendering and on-screen operations; when a graphics application completes rendering of a data frame, depending on the system implementation, either perform a coordinate transformation or monitor the vertex shader output to obtain rasterized screen-space coordinates; determine the target bounding box of the application's current data frame from these coordinates; and select the data frame to retain according to the difference between the areas of the current and previous target bounding boxes, finally performing the on-screen operation with the retained frame.
The method specifically comprises the following steps:
Step 1: in the system, enable monitoring of graphics-drawing API calls to acquire the vertex coordinates and primitive information involved, and store the mapping among graphics application ID, call time, vertex coordinates, and primitive information in a vertex-primitive mapping table; enable monitoring of coordinate-transformation API calls to acquire the vertex coordinates involved and the model-view matrix used by the transformation, and store the mapping among graphics application ID, call time, vertex coordinates, and model-view matrix in a vertex-view-matrix mapping table; and enable monitoring of vertex-shader API calls to acquire the vertex coordinates involved and the corresponding screen-space coordinates, and store the mapping among graphics application ID, call time, vertex coordinates, and screen-space coordinates in a vertex-space-coordinate mapping table.
When the graphics interface used by the system is OpenGL, the vertex-shader API calls are monitored through transform feedback, which captures the vertex coordinates output by the vertex shader into a buffer object for later query and use. The procedure is as follows:
step 1.1, creating and binding a conversion feedback object, wherein example codes are as follows.
GLuint transformFeedback;
glGenTransformFeedbacks(1, &transformFeedback);
glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, transformFeedback);
Step 1.2: create a buffer object and bind it to the transform feedback object so that the data output by the vertex shader is stored in the buffer; example code follows.
GLuint tbo;
glGenBuffers(1, &tbo);
glBindBuffer(GL_ARRAY_BUFFER, tbo);
glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STATIC_READ);
// size: the buffer must be large enough to hold the data expected to be captured
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tbo);
Step 1.3: when linking the program object, specify the shader variables to be captured, i.e., the transformed coordinate outputs; example code follows.
const GLchar* varyings[] = {"outputVarName"};
// "outputVarName" is the name of the output coordinate variable in the vertex shader
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);
Step 1.4: enable transform feedback mode and execute the graphics draw commands as normal; example code follows.
glEnable(GL_RASTERIZER_DISCARD);
// disable the rasterization stage: only capture data, do not render
glBeginTransformFeedback(GL_POINTS);
// the argument specifies the type of primitive being captured
// draw commands: the graphics application's normal draw calls go here
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
Step 1.5: read back the output coordinates; example code follows.
GLfloat* feedbackBuffer = (GLfloat*)glMapBufferRange(GL_TRANSFORM_FEEDBACK_BUFFER, 0, size, GL_MAP_READ_BIT);
// read the output coordinate data from feedbackBuffer
...
glUnmapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER);
Step 2: when the rendering of a data frame is observed to complete, take the current data frame as the target frame, take the graphics application ID associated with it as the target ID, fetch from the vertex-primitive mapping table the group of vertex coordinates with the latest call time whose graphics application ID equals the target ID, mark each vertex as a target vertex coordinate, and mark the group as the target vertex coordinate group, which corresponds to the current data frame;
obtain the primitive information corresponding to each vertex coordinate in the target vertex coordinate group as a target primitive, forming the target primitive group;
search the vertex-view-matrix mapping table and the vertex-space-coordinate mapping table for records matching both the target ID and the target vertex coordinate group; if matching records exist in the vertex-view-matrix mapping table, execute step 3; if matching records exist in the vertex-space-coordinate mapping table, execute step 4.
Step 3: traverse the target vertex coordinate group; for each target vertex coordinate, fetch all records in the vertex-view-matrix mapping table matching the target ID and that vertex coordinate, take the model-view matrix with the latest call time as the target model-view matrix, and apply the coordinate transformation (i.e., rasterization) to the target vertex coordinate based on the target model-view matrix to obtain its target screen-space coordinate. The target screen-space coordinates corresponding to all target vertex coordinates form the target screen-space coordinate group; then execute step 5.
Step 4: traverse the target vertex coordinate group; for each target vertex coordinate, fetch all records in the vertex-space-coordinate mapping table matching the target ID and that vertex coordinate, and take the screen-space coordinate with the latest call time as the target screen-space coordinate. The target screen-space coordinates corresponding to all target vertex coordinates form the target screen-space coordinate group; then execute step 5.
Step 5: traverse the target primitive group and the target screen-space coordinate group, and determine the bounding box of each pair of target primitive and target screen-space coordinates as a target bounding box. All target bounding boxes corresponding to the target primitive group and the target screen-space coordinate group form the target bounding box group; merge all target bounding boxes in the group to obtain a first bounding box that contains them all. This first bounding box is the to-be-displayed region of the target frame; compute its area and record it as the target frame area.
If the current target frame area is smaller than a set threshold relative to the target frame area already stored for the target ID in the to-be-displayed data table (for example, 80% of the existing target frame area), leave the table unchanged and discard the current frame; otherwise, delete the existing data in the table and store the target ID, the target frame, and the current target frame area.
To further improve execution efficiency, the invention may represent each bounding box by the minimum and maximum x and y values of the target screen-space coordinates, i.e., an axis-aligned bounding box.
Merging all target bounding boxes in the target bounding box group into a first bounding box that contains them effectively removes the overlapping portions among the boxes in the group. Algorithms for the merge operation include the plane sweep-line algorithm, the Weiler-Atherton polygon clipping algorithm, the Bentley-Ottmann algorithm, and the Vatti polygon clipping algorithm.
The area of the to-be-displayed region can be computed by triangle splitting, the cross-section method, or similar techniques; in practice, a geometry library such as CGAL can be called to compute the polygon area.
Step 6: upon receiving the on-screen signal, perform the on-screen operation with the target frame stored in the to-be-displayed data table.
Examples
The rendering optimization method based on a rasterization difference metric provided by the invention optimizes graphics rendering by hooking the relevant graphics-processing APIs, specifically as follows:
S1: after detecting that the graphics application has executed the graphics-interface initialization (e.g., glutInit) or has completed drawing a data frame, capture and record all draw calls: monitor the rendering functions that involve vertex coordinates and record the primitive information (the primitive's shape) and the vertex coordinates; the vertex coordinate set is mapElement2Vertex.
The relevant rendering functions mainly include:
glVertex*(): a family of functions that specify vertex coordinates. The * denotes the different variants, which accept different numbers and types of coordinate parameters (e.g., glVertex2f, glVertex3i) for two- and three-dimensional coordinates.
glDrawArrays(): draws from a vertex array. It does not take coordinates directly; instead it reads coordinate data through a previously set vertex array pointer and combines the vertex data into primitives according to its mode parameter (e.g., GL_TRIANGLES, GL_LINES).
glDrawElements(): similar to glDrawArrays(), but draws the vertex array through an index array. It uses the previously set vertex array and index array to determine how the vertex coordinates are organized.
glMultiDrawArrays() and glMultiDrawElements(): extensions of glDrawArrays() and glDrawElements() that draw multiple geometries in a single call.
Completion of a data frame by the graphics application is detected by hooking a graphics-processing interface such as glSwapBuffers of OpenGL or eglSwapBuffers of OpenGL ES.
S2: monitor the coordinate-transformation operations of OpenGL, including model-view transformation, projection transformation, and viewport transformation, which change where vertex coordinates appear on the screen. The functions to monitor include glTranslate, glRotate, glScale, glLoadMatrix, and so on.
In this embodiment, the monitoring process is illustrated with the function glTranslate as an example:
S2.1: during OpenGL initialization, call the following code to record the current model-view matrix:
GLfloat g_currentMatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, g_currentMatrix);
S2.2: after glTranslate is called, execute the following code to obtain the current model-view matrix:
glGetFloatv(GL_MODELVIEW_MATRIX, g_currentMatrix);
Since glTranslate may be called multiple times during execution, g_currentMatrix always holds the latest model-view matrix.
S3: determine whether the graphics application uses a vertex shader by monitoring the relevant API: monitor OpenGL's glCreateShader; if its parameter is GL_VERTEX_SHADER, a vertex shader is in use and S4 is executed; otherwise no vertex shader is used and S5 is executed.
S4: obtain the screen-space coordinates of the rendered graphics from the vertex coordinates output by the vertex shader, then execute S6.
Typically, the vertex shader outputs normalized device coordinates (NDC), which are mapped to window coordinates by the viewport transformation; the primitive information and its coordinates are recorded in mapElement2Vertex. Since the vertex shader may be invoked many times during execution, mapElement2Vertex holds the latest primitive information and the vertex coordinates corresponding to it.
S5: based on the model-view matrix stored in g_currentMatrix, apply the coordinate transformation to the final vertex coordinates of each primitive in mapElement2Vertex to obtain the screen-space coordinates corresponding to the vertex coordinates.
S6: when rendering of a data frame finishes, i.e., immediately after the call to glSwapBuffers of OpenGL or eglSwapBuffers of OpenGL ES, decide whether to keep only the current data frame (clearing the earlier ones) or to discard the current frame, as follows:
S6.1: read the screen-space coordinates of each primitive from mapElement2Vertex and determine the bounding box of each primitive (triangle, line segment, point, etc.) from the rasterized coordinates. A bounding box is a boundary in two-dimensional space.
S6.2: merge the two-dimensional bounding boxes produced by rasterizing each primitive into one large bounding polygon that represents the total area affected by all draw calls.
In this embodiment, the Vatti polygon clipping algorithm is used to merge the two-dimensional bounding boxes into a result polygon. The algorithm handles polygon union with holes well: it classifies the edges of each polygon to determine whether each edge belongs to the inner or outer boundary of the result polygon, then constructs the result polygon to realize the merge.
S6.3: compute the area of the merged polygon as the rasterized area, denote it currentRasterizedArea, and store it in the table arrayRasterizedArea.
S6.4: traverse each previous data frame recorded in arrayRasterizedArea, i.e., the frames rendered since the last on-screen operation. If the area currentRasterizedArea of the current frame is smaller than a set threshold relative to a previous frame, for example 80% of that frame's area, discard the current frame's cache; otherwise, discard the previously buffered frame in arrayRasterizedArea, release the frame buffer it occupies, and add the current frame to arrayRasterizedArea.
S7: upon receiving an on-screen signal, e.g., a Fence signal, fetch the latest data frame from arrayRasterizedArea and perform the on-screen operation.
In summary, the above embodiments are only preferred embodiments of the present invention and are not intended to limit its scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (7)

1. A rendering optimization method based on a rasterization difference metric, characterized by comprising the following steps:
Step 1: obtain the vertex coordinates, primitive information, model-view matrices, and screen-space coordinates involved in graphics rendering; store the mapping among graphics application ID, vertex coordinates, and primitive information in a vertex-primitive mapping table; store the mapping among graphics application ID, vertex coordinates, and model-view matrix in a vertex-view-matrix mapping table; and store the mapping among graphics application ID, vertex coordinates, and screen-space coordinates in a vertex-space-coordinate mapping table;
Step 2: when the rendering of a data frame is observed to complete, take the current data frame as the target frame, take its graphics application ID as the target ID, and fetch from the vertex-primitive mapping table the most recent group of vertex coordinates associated with the target ID; mark each vertex as a target vertex coordinate and the group as the target vertex coordinate group, which corresponds to the current data frame; obtain the primitive information corresponding to each vertex coordinate in the target vertex coordinate group as a target primitive, forming the target primitive group; search the vertex-view-matrix mapping table and the vertex-space-coordinate mapping table for records matching both the target ID and the target vertex coordinate group; if matching records exist in the vertex-view-matrix mapping table, execute step 3; if matching records exist in the vertex-space-coordinate mapping table, execute step 4;
Step 3: traverse the target vertex coordinate group; for each target vertex coordinate, fetch all records in the vertex-view-matrix mapping table matching the target ID and that vertex coordinate, take the most recent model-view matrix as the target model-view matrix, and use it to rasterize the target vertex coordinate into a target screen-space coordinate; the target screen-space coordinates of all target vertex coordinates form the target screen-space coordinate group; then execute step 5;
Step 4: traverse the target vertex coordinate group; for each target vertex coordinate, fetch all records in the vertex-space-coordinate mapping table matching the target ID and that vertex coordinate, and take the most recent screen-space coordinate as the target screen-space coordinate; the target screen-space coordinates of all target vertex coordinates form the target screen-space coordinate group; then execute step 5;
Step 5: traverse the target primitive group and the target screen-space coordinate group, and determine the bounding box of each pair of target primitive and target screen-space coordinates as a target bounding box; all target bounding boxes corresponding to the target primitive group and the target screen-space coordinate group form the target bounding box group; merge all target bounding boxes in the group to obtain a first bounding box, and record its area as the target frame area;
if the current target frame area is smaller than a set threshold relative to the target frame area already stored for the target ID in the to-be-displayed data table, leave the table unchanged; otherwise, delete the existing data in the table and store the target ID, the target frame, and the current target frame area;
Step 6: upon receiving the on-screen signal, perform the on-screen operation with the target frame stored in the to-be-displayed data table.
2. The rendering optimization method according to claim 1, wherein the vertex coordinates and primitive information in step 1 are obtained by monitoring the API calls related to graphics drawing.
3. The rendering optimization method according to claim 1, wherein the model-view matrix in step 1 is obtained by monitoring the vertex coordinates involved in coordinate-transformation API calls and the model-view matrices used by those transformations.
4. The rendering optimization method according to claim 1, wherein the screen-space coordinates in step 1 are obtained by monitoring the vertex coordinates and screen-space coordinates involved in vertex-shader-related API calls.
5. The rendering optimization method according to any one of claims 2 to 4, wherein the call time is recorded whenever an API call is monitored.
6. The rendering optimization method according to claim 1, wherein the merging of all target bounding boxes in the target bounding box group in step 5 to obtain the first bounding box is implemented with the Vatti polygon clipping algorithm.
7. The rendering optimization method according to claim 1, wherein the target frame area in step 5 is computed by triangle splitting or by the cross-section method.
CN202410313698.6A 2024-03-19 Rendering optimization method based on rasterization difference measurement Pending CN118135076A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410313698.6A CN118135076A (en) 2024-03-19 2024-03-19 Rendering optimization method based on rasterization difference measurement


Publications (1)

Publication Number Publication Date
CN118135076A 2024-06-04

Family

ID=91231234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410313698.6A Pending CN118135076A (en) 2024-03-19 2024-03-19 Rendering optimization method based on rasterization difference measurement

Country Status (1)

Country Link
CN (1) CN118135076A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination