CN116958389A - Object rendering method and device, electronic equipment and storage medium

Info

Publication number
CN116958389A
CN116958389A (application number CN202210394558.7A)
Authority
CN
China
Prior art keywords
rendered, rendering, preset, objects, information
Prior art date
Legal status
Pending
Application number
CN202210394558.7A
Other languages
Chinese (zh)
Inventor
张鹤
Current Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tencent Network Information Technology Co Ltd filed Critical Shenzhen Tencent Network Information Technology Co Ltd
Priority application: CN202210394558.7A
Related PCT application: PCT/CN2023/074536 (published as WO2023197729A1)
Publication: CN116958389A

Classifications

    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G06T 15/20 - Perspective computation
    • G06T 15/205 - Image-based rendering
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The application provides an object rendering method and apparatus, an electronic device and a storage medium, which can be applied to various scenes such as cloud technology, artificial intelligence, intelligent transportation and assisted driving. The method comprises the following steps: responding to an object rendering instruction of an object to be rendered, and acquiring position information of the object to be rendered in a preset rendering window, the object to be rendered comprising a mesh object to be rendered and a page element object to be rendered corresponding to a target page; determining, according to the position information, the rendering order corresponding to each of the mesh object to be rendered and the page element object to be rendered; and rendering, according to the rendering order, the mesh object to be rendered and the page element object to be rendered in the target page to obtain a rendered target page element object. The solution of the embodiments of the application reduces memory consumption and improves object rendering efficiency.

Description

Object rendering method and device, electronic equipment and storage medium
Technical Field
The application belongs to the field of computer technology, and in particular relates to an object rendering method, an object rendering apparatus, an electronic device and a storage medium.
Background
When a game is made with the Unity engine, the related art typically renders objects onto a render texture (RenderTexture) and then renders that texture onto the user interface (UI). The Unity engine is a real-time interactive graphics authoring platform.
However, the related art has to create a RenderTexture object, which not only wastes memory but also degrades rendering performance because it introduces an additional camera and additional rendering instructions (draw calls). Here, the camera is the Unity engine component used to define a visible range and render that range to the screen.
Disclosure of Invention
To solve the above problems, the application provides an object rendering method, an object rendering apparatus, an electronic device and a storage medium.
In one aspect, the present application provides an object rendering method, including:
responding to an object rendering instruction of an object to be rendered, and acquiring position information of the object to be rendered in a preset rendering window; the object to be rendered comprises a mesh object to be rendered and a page element object to be rendered corresponding to a target page;
determining, according to the position information, the rendering order corresponding to each of the mesh object to be rendered and the page element object to be rendered; and
rendering, according to the rendering order, the mesh object to be rendered and the page element object to be rendered in the target page to obtain a rendered target page element object.
In another aspect, an embodiment of the present application provides an object rendering apparatus, including:
a position information acquisition module, configured to respond to an object rendering instruction of an object to be rendered and acquire position information of the object to be rendered in a preset rendering window; the object to be rendered comprises a mesh object to be rendered and a page element object to be rendered corresponding to a target page;
a rendering order determination module, configured to determine, according to the position information, the rendering order corresponding to each of the mesh object to be rendered and the page element object to be rendered; and
a rendering module, configured to render, according to the rendering order, the mesh object to be rendered and the page element object to be rendered in the target page to obtain a rendered target page element object.
In another aspect, the present application provides an electronic device for rendering an object, where the electronic device includes a processor and a memory, and at least one instruction or at least one program is stored in the memory, where the at least one instruction or at least one program is loaded and executed by the processor to implement the object rendering method as described above.
In another aspect, the present application proposes a computer readable storage medium having stored therein at least one instruction or at least one program, the at least one instruction or the at least one program being loaded and executed by a processor to implement an object rendering method as described above.
In another aspect, the application proposes a computer program product comprising a computer program which, when executed by a processor, implements the object rendering method described above.
According to the object rendering method and device, electronic equipment and storage medium provided by the application, the position information of the object to be rendered in the preset rendering window is acquired in response to the object rendering instruction of the object to be rendered, the rendering order corresponding to each of the mesh object to be rendered and the page element object to be rendered is determined according to the position information, and the mesh object to be rendered and the page element object to be rendered are rendered in the target page according to the rendering order to obtain the rendered target page element object. Because the mesh object to be rendered is placed in the UI renderer for rendering, it can be sorted together with the page element object to be rendered, which reduces memory consumption during object rendering and improves object rendering performance.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an implementation environment of an object rendering method according to an exemplary embodiment.
Fig. 2 is a flow diagram illustrating a method of object rendering according to an exemplary embodiment.
FIG. 3 is a flowchart illustrating a method of generating object rendering instructions according to an example embodiment.
FIG. 4 is a flowchart illustrating stitching a predetermined number of candidate mesh objects to be rendered to obtain stitched mesh objects, according to an exemplary embodiment.
Fig. 5 is a flowchart illustrating a process for stitching the vertex coordinate information to obtain stitched vertex coordinate information according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating a method for calculating the product of vertex coordinate information and a target transformation matrix to obtain transformed vertex coordinate information corresponding to each of a predetermined number of candidate mesh objects to be rendered according to an exemplary embodiment.
FIG. 7 is a flowchart illustrating stitching transformed vertex coordinate information to obtain stitched vertex coordinate information, according to an example embodiment.
FIG. 8 is a schematic diagram illustrating a stitched mesh object to be rendered, according to an example embodiment.
Fig. 9 is a flowchart illustrating determining preset position information of each of a plurality of preset objects to be rendered in a preset rendering window according to the above-described attribute information according to an exemplary embodiment.
Fig. 10 is a flowchart illustrating determining the rendering order of each of the mesh object to be rendered and the page element object to be rendered according to the position information, according to an exemplary embodiment.
Fig. 11 is a flowchart illustrating a method for rendering the mesh object to be rendered and the page element object to be rendered in a target page according to the rendering order to obtain a rendered target page element object, according to an exemplary embodiment.
FIG. 12 is a flowchart illustrating a triggering of a target page according to an exemplary embodiment.
Fig. 13 is a schematic diagram of a model corresponding to a mesh object to be rendered according to an exemplary embodiment.
FIG. 14 is a diagram illustrating rendering of the mesh object to be rendered of FIG. 13 into a target page, resulting in a corresponding target page element object, according to an example embodiment.
Fig. 15 is a schematic diagram showing the display effect of the mesh object to be rendered of fig. 13 in a game scene according to an exemplary embodiment.
Fig. 16 is a block diagram of an object rendering apparatus according to an exemplary embodiment.
Fig. 17 is a block diagram of a hardware architecture of a server for object rendering, according to an example embodiment.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a schematic view of an implementation environment of an object rendering method according to an exemplary embodiment. As shown in fig. 1, the implementation environment may include at least a terminal 01 and a server 02, where the terminal 01 and the server 02 may be directly or indirectly connected through a wired or wireless communication manner, and the present application is not limited herein. The terminal 01 may include a central processor (Central Processing Unit, CPU) and a graphics processor (Graphics Processing Unit, GPU), which may include a Canvas Renderer (Canvas Renderer).
Specifically, the terminal may be configured to obtain an object to be rendered and a rendering resource corresponding to the object to be rendered, and render the object to be rendered according to the rendering resource. Alternatively, the terminal 01 may be a computer device that performs 3D rendering based on the Unity engine, which may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart tv, a smart watch, or the like, but is not limited thereto.
Specifically, the server 02 may provide background services for the terminal. Optionally, the server 02 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms.
It should be noted that fig. 1 is only an example. In other scenarios, other implementation environments may also be included. For example, the implementation environment includes a server, which may be a computer device that performs 3D rendering based on the Unity engine. Wherein, the Unity engine refers to a 3D real-time interactive graphic authoring platform.
An application scene of the embodiment of the application is to render a 3D scene in various computer devices which perform 3D rendering based on a Unity engine. Optionally, the 3D scene may include, but is not limited to, the following: scene rendering in large 3D games, 3D animations, VR presentations.
Technical terms used in the embodiments of the present application are described below:
GPU: a graphics processor on the computer device for rendering graphics.
CPU: a central processor on the computer device for processing the program logic.
UI: user interface, which accepts input from the account corresponding to the terminal and displays the execution result.
Rendering: the process of converting basic primitives (triangles, quadrilaterals, points, lines, etc.) into 2D screen pixels.
Canvas: the canvas component on which UI components are drawn, so that UI components can be placed at suitable positions on devices with different resolutions.
Px: pixel, a small tile of an image; each pixel has a definite position and an assigned color value, and the color and position of these tiles determine how the image appears.
Draw Call: a drawing instruction sent by the CPU to the GPU. Generally, issuing a draw call involves a series of steps such as setting the device context, setting the material, setting the mesh, setting variables of the material and setting textures.
Unity engine: a 3D real-time interactive graphics authoring platform, commonly used for making games.
Fig. 2 is a flow diagram illustrating a method of object rendering according to an exemplary embodiment. The method may be used in the implementation environment of fig. 1. The present specification provides method operational steps as described above, for example, in the examples or flowcharts, but may include more or fewer operational steps based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one way of performing the order of steps and does not represent a unique order of execution. When implemented in a real system or server product, the methods illustrated in the embodiments or figures may be performed sequentially or in parallel (e.g., in a parallel processor or multithreaded environment). As shown in fig. 2, the method may include:
S101, responding to an object rendering instruction of an object to be rendered, and acquiring position information of the object to be rendered in a preset rendering window; the object to be rendered comprises a mesh object to be rendered and a page element object to be rendered corresponding to the target page.
In an alternative embodiment, the object rendering instruction of the object to be rendered in step S101 may be generated by the CPU and sent by the CPU to the GPU, and the Canvas Renderer, a Unity component on the GPU side, may acquire, in response to the object rendering instruction, the position information of the object to be rendered in the preset rendering window. The object to be rendered comprises a mesh object to be rendered and a page element object to be rendered corresponding to the target page.
The Canvas Renderer is described below. Unity derives its UI components from a common base class, the maskable graphic class (MaskableGraphic): all UI components, whether Text, Raw Image, Button, Table and so on, inherit from MaskableGraphic. The Canvas Renderer also inherits from MaskableGraphic and can support drawing and rendering a mesh (Mesh), so Unity regards the Canvas Renderer as a UI component and arranges it together with the other UI components. In other words, the rendering order of the Canvas Renderer is not the near-to-far order used for opaque objects or the far-to-near order used for translucent objects relative to the camera; it is ordered by the position of the objects within the preset rendering window.
Illustratively, the preset rendering window may be a Hierarchy window. The Hierarchy window lists all objects to be rendered in the current scene. When game objects to be rendered are added to or deleted from the scene, they correspondingly appear in or disappear from the Hierarchy window. Objects to be rendered can be reordered by dragging them up or down, or by making one object a "child" or "parent" of another object to be rendered.
Optionally, the object rendering instruction may carry a rendering resource of the object to be rendered, where the rendering resource may include a series of steps or processes of setting a device context, setting a material, setting a mesh (mesh), setting a variable of the material, setting a texture, and the like.
Optionally, the object to be rendered refers to a visualized object that can be rendered to a screen. As an example, the Mesh object to be rendered may be a 3D Mesh object (3D Mesh). The page element object to be rendered may include, but is not limited to: UI text, UI buttons, etc.
In one possible embodiment, in a case where there is a single mesh object to be rendered, the CPU may generate an object rendering instruction corresponding to that mesh object to be rendered and an object rendering instruction corresponding to the page element object to be rendered, and take these two instructions as the final object rendering instructions.
In another possible embodiment, when there are multiple mesh objects to be rendered and no preset number of candidate mesh objects to be rendered among them satisfy the preset condition, the CPU may generate an object rendering instruction corresponding to each of the mesh objects to be rendered and an object rendering instruction corresponding to the page element object to be rendered, and take these instructions as the final object rendering instructions. As an example, satisfying the preset condition can be understood as follows: if several mesh objects to be rendered have the same component type, use the same material and use the same map, those mesh objects are regarded as the preset number of candidate mesh objects to be rendered that satisfy the preset condition, where the preset number is a positive integer greater than 1. Component types include, but are not limited to, the Text component, the RawImage component, the Button component, and so on.
In another possible embodiment, when there are multiple mesh objects to be rendered and a preset number of candidate mesh objects to be rendered among them do satisfy the preset condition, those candidate mesh objects may be merged for rendering. Accordingly, Fig. 3 is a flowchart illustrating a method for generating object rendering instructions according to an exemplary embodiment; the method may further include:
S201, obtaining a plurality of mesh objects to be rendered and rendering resources corresponding to each of the plurality of mesh objects to be rendered.
Optionally, step S201 may be performed by the CPU. The CPU may obtain, in advance, the plurality of mesh objects to be rendered and the rendering resources corresponding to each of them. As one example, the rendering resources may include, but are not limited to: setting the device context, setting the material, setting the mesh, setting variables of the material, setting textures, and the like.
S203, determining, from the plurality of mesh objects to be rendered and based on their corresponding rendering resources, a preset number of candidate mesh objects to be rendered whose rendering resources satisfy the preset condition.
Optionally, step S203 may be performed by the CPU. According to the rendering resources corresponding to the plurality of mesh objects to be rendered, the CPU can select the mesh objects that have the same component type, use the same material and use the same map, thereby obtaining the preset number of candidate mesh objects to be rendered whose rendering resources satisfy the preset condition.
For example, suppose there are five mesh objects to be rendered (mesh1, mesh2, mesh3, mesh4 and mesh5). The CPU may determine, from these five meshes, that the mesh objects with the same component type, the same material and the same map are mesh1, mesh2 and mesh3; mesh1, mesh2 and mesh3 are then the preset number of candidate mesh objects to be rendered whose rendering resources satisfy the preset condition.
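As an illustrative sketch of this selection step (not the patented implementation; the data structures and field names below are assumptions), candidate meshes can be grouped by component type, material and map, and any group with more than one member becomes a batch of candidates that may be spliced:

```python
# Illustrative sketch only: RenderResource and find_candidates are hypothetical names,
# not the patent's or Unity's API.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class RenderResource:
    component_type: str
    material: str
    texture_map: str

def find_candidates(meshes: dict[str, RenderResource]) -> list[list[str]]:
    """Group meshes by (component type, material, map); groups with more than one member
    are candidate mesh objects whose rendering resources satisfy the preset condition."""
    groups: dict[tuple[str, str, str], list[str]] = defaultdict(list)
    for name, res in meshes.items():
        groups[(res.component_type, res.material, res.texture_map)].append(name)
    return [names for names in groups.values() if len(names) > 1]

meshes = {
    "mesh1": RenderResource("RawImage", "grass_mat", "grass_tex"),
    "mesh2": RenderResource("RawImage", "grass_mat", "grass_tex"),
    "mesh3": RenderResource("RawImage", "grass_mat", "grass_tex"),
    "mesh4": RenderResource("RawImage", "rock_mat", "rock_tex"),
    "mesh5": RenderResource("Text", "font_mat", "font_tex"),
}
print(find_candidates(meshes))  # [['mesh1', 'mesh2', 'mesh3']]
```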
S205, splicing the preset number of candidate mesh objects to be rendered to obtain a spliced mesh object to be rendered.
Alternatively, the above step S205 may be performed by the CPU. The CPU may splice a preset number of candidate mesh objects to be rendered in a plurality of manners, which is not specifically limited in the embodiment of the present application.
In one possible embodiment, fig. 4 is a flowchart illustrating splicing the preset number of candidate mesh objects to be rendered to obtain the spliced mesh object to be rendered, according to an exemplary embodiment. As shown in fig. 4, splicing the preset number of candidate mesh objects to be rendered in step S205 to obtain the spliced mesh object to be rendered may include:
S2051, obtaining vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered.
S2053, splicing the vertex coordinate information to obtain spliced vertex coordinate information.
S2055, generating the spliced grid object to be rendered according to the spliced vertex coordinate information.
Alternatively, the above steps S2051 to S2055 may be executed by a CPU. In order to achieve the rendering merging, in the steps S2051 to S2055, the CPU may obtain vertex coordinate information corresponding to each of a preset number of candidate mesh objects to be rendered, splice the vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered to obtain spliced vertex coordinate information, and finally generate the spliced mesh objects to be rendered according to the spliced vertex coordinate information.
Because vertex coordinates accurately reflect the position information of mesh objects, splicing the vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered and generating the spliced mesh object to be rendered from the spliced vertex coordinate information improves the accuracy and efficiency of determining the spliced mesh object to be rendered, and thereby the accuracy and efficiency of object rendering.
Fig. 5 is a flowchart illustrating splicing the vertex coordinate information to obtain spliced vertex coordinate information according to an exemplary embodiment. As shown in fig. 5, in a possible embodiment, splicing the vertex coordinate information in step S2053 to obtain the spliced vertex coordinate information may include:
S20531, determining, for the vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered, the space conversion operation required to convert from the space in which each candidate mesh object to be rendered is located to the world coordinate space.
S20533, determining the target conversion matrix corresponding to each of the preset number of candidate mesh objects to be rendered based on the preset conversion matrix and the space conversion operation.
S20535, calculating the product of the vertex coordinate information and the target conversion matrix to obtain converted vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered.
S20537, splicing the converted vertex coordinate information to obtain spliced vertex coordinate information.
Optionally, steps S20531-S20537 may be performed by the CPU. Since each mesh object (Mesh) to be rendered has its own model coordinates, the model coordinates of different meshes are not in a common space, so the positional relationship between vertices of different meshes cannot be described directly. In order to accurately describe the positional relationship between the vertices of different meshes and improve the precision of the vertex coordinate information splicing, the CPU can transfer the coordinates of the different meshes into the world coordinate space.
Optionally, in step S20531, the vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered may be determined, together with the space conversion operation required to convert from the space in which each candidate mesh object to be rendered is located to the world coordinate space. The world coordinate space is the space in which world coordinates are located, and world coordinates are the absolute coordinates of the system. As one example, the space conversion operation may include, but is not limited to: a scaling operation, a rotation operation, a translation operation, and so on. For example, the space conversion operations required to bring certain vertex coordinate information into the world coordinate space might be a (2, 2, 2) scaling, a (0, 60, 0) rotation and a (2, 0, 2) translation.
Alternatively, in the above step S20533, the preset conversion matrix may correspond to a spatial conversion operation. For example, the spatial transformation operation includes a scaling operation, a rotation operation, and a translation operation, and the preset transformation matrix may include a scaling matrix, a rotation matrix, and a translation matrix. As an example, the preset transformation matrix may be a product of a scaling matrix, a rotation matrix, and a translation matrix.
As an example, the numerical value corresponding to the spatial conversion operation may be substituted into the preset conversion matrix, thereby obtaining the target conversion matrix.
Assume that the preset conversion matrix is composed of a translation matrix, a rotation matrix and a scaling matrix, whose standard homogeneous forms (the rotation written here about the y-axis, matching the (0, 60, 0) example below) are:

\[
T=\begin{pmatrix}1&0&0&t_x\\0&1&0&t_y\\0&0&1&t_z\\0&0&0&1\end{pmatrix},\qquad
R_y(\theta)=\begin{pmatrix}\cos\theta&0&\sin\theta&0\\0&1&0&0\\-\sin\theta&0&\cos\theta&0\\0&0&0&1\end{pmatrix},\qquad
S=\begin{pmatrix}k_x&0&0&0\\0&k_y&0&0\\0&0&k_z&0\\0&0&0&1\end{pmatrix}
\]

where the first matrix is the translation matrix, the second matrix is the rotation matrix, and the third matrix is the scaling matrix. tx, ty and tz in the translation matrix are the values of the translation operation along the x-axis, y-axis and z-axis, θ in the rotation matrix is the rotation angle, and kx, ky and kz in the scaling matrix are the values of the scaling operation along the x-axis, y-axis and z-axis.
Assuming that the space conversion operation required to transform a candidate mesh object to be rendered from its own space into the world coordinate space is a (2, 2, 2) scaling, a (0, 60, 0) rotation and a (2, 0, 2) translation, then substituting the (2, 2, 2) scaling into the scaling matrix, the (0, 60, 0) rotation into the rotation matrix and the (2, 0, 2) translation into the translation matrix yields a 4x4 target transformation matrix, which may specifically be:
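Under the column-vector convention M = T · R_y(60°) · S (the composition order is an assumption here, since it is not stated explicitly), with cos 60° = 0.5 and sin 60° ≈ 0.866, the product evaluates to:

\[
M = T\,R_y(60^{\circ})\,S = \begin{pmatrix}1 & 0 & 1.732 & 2\\ 0 & 2 & 0 & 0\\ -1.732 & 0 & 1 & 2\\ 0 & 0 & 0 & 1\end{pmatrix}
\]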
Optionally, in the above step S20535-step S20537, the product between the vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered and the target transformation matrix corresponding to each of the preset number of candidate mesh objects to be rendered may be calculated, so as to obtain transformed vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered. In the above step S20537, the CPU may stitch the converted vertex coordinate information to obtain stitched vertex coordinate information.
In the embodiment of the disclosure, the spatial conversion is performed on the vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered by the CPU, and the converted vertex coordinate information is spliced, so that different mesh objects to be rendered are all converted into the world coordinate space, the model coordinates of different mesh objects to be rendered can be universal, the position relation among different mesh objects to be rendered can be accurately described, the splicing precision of different mesh objects to be rendered is improved, and the object rendering precision is further improved; in addition, the vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered is spliced to obtain integral spliced vertex coordinate information, so that a CPU only needs to generate one rendering instruction aiming at the integral spliced vertex coordinate information, the consumption of the rendering process on the computing resource of the CPU is reduced, and the burden of the CPU is reduced.
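As an illustrative sketch of the multiplication in step S20535 (not the patented implementation; the function and variable names below are hypothetical), the vertex positions can be written in homogeneous coordinates and multiplied by the 4x4 target conversion matrix to obtain world-space coordinates:

```python
# Illustrative sketch only, assuming plain numpy arrays rather than Unity mesh data.
import numpy as np

def transform_vertices(vertices: np.ndarray, target_matrix: np.ndarray) -> np.ndarray:
    """vertices: (N, 3) model-space positions; target_matrix: 4x4 model-to-world matrix."""
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])  # (N, 4)
    world = homogeneous @ target_matrix.T                             # row-vector form of M * v
    return world[:, :3]                                               # back to (N, 3) world positions

# Example: a (2, 2, 2) scaling followed by a (2, 0, 2) translation (rotation omitted for brevity).
target = np.array([[2.0, 0.0, 0.0, 2.0],
                   [0.0, 2.0, 0.0, 0.0],
                   [0.0, 0.0, 2.0, 2.0],
                   [0.0, 0.0, 0.0, 1.0]])
print(transform_vertices(np.array([[1.0, 1.0, 1.0]]), target))  # [[4. 2. 4.]]
```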
Fig. 6 is a flowchart illustrating a method for calculating the product of vertex coordinate information and a target transformation matrix to obtain transformed vertex coordinate information corresponding to each of a predetermined number of candidate mesh objects to be rendered according to an exemplary embodiment. As shown in fig. 6, in an alternative embodiment, the vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered includes vertex buffer information and index buffer information, and in step S20535, the calculating the product of the vertex coordinate information and the target transformation matrix to obtain transformed vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered may include:
S205351, calculating the product of the vertex buffer information corresponding to each of the preset number of candidate mesh objects to be rendered and the target conversion matrix, to obtain converted vertex buffer information corresponding to each candidate mesh object to be rendered.
S205353, calculating the product of the index buffer information corresponding to each of the preset number of candidate mesh objects to be rendered and the target conversion matrix, to obtain converted index buffer information corresponding to each candidate mesh object to be rendered.
S205355, using the converted vertex buffer information and the converted index buffer information as the converted vertex coordinate information.
Here, the vertex buffer information (VBO) is used to store vertex coordinates, vertex uv, vertex normals, vertex colors and so on, where uv refers to coordinates in the horizontal and vertical directions. The index buffer information (IBO) is used to store vertex indices of the unsigned integer (unsigned int) or unsigned short integer (unsigned short) data type.
Optionally, steps S205351-S205355 may be performed by the CPU. The CPU can multiply the vertex buffer information and the index buffer information corresponding to each of the preset number of candidate mesh objects to be rendered by the target conversion matrix, obtain the converted vertex buffer information and converted index buffer information corresponding to each candidate mesh object to be rendered, and use the converted vertex buffer information and the converted index buffer information as the converted vertex coordinate information. In this way the vertex buffer information and index buffer information of different mesh objects to be rendered are all brought into the world coordinate space, which improves the completeness and accuracy of the vertex coordinate information conversion; the positional relationship between different mesh objects to be rendered can then be described accurately through the more complete and accurate converted vertex coordinate information, which improves the splicing precision of the different mesh objects to be rendered and thus the precision of object rendering.
For example, suppose the preset number of candidate mesh objects to be rendered includes mesh object A (meshA) and mesh object B (meshB). meshA has four vertices, its VBO is A1 A2 A3 A4 and its IBO is A1A2A3, A2A3A4; meshB has four vertices, its VBO is B1 B2 B3 B4 and its IBO is B1B2B3, B2B3B4. The CPU multiplies A1 A2 A3 A4 by the target conversion matrix to obtain the converted vertex buffer information (A_WP1 A_WP2 A_WP3 A_WP4) corresponding to meshA, and multiplies A1A2A3, A2A3A4 by the target conversion matrix to obtain the converted index buffer information (A_WP1A_WP2A_WP3, A_WP2A_WP3A_WP4) corresponding to meshA. The CPU multiplies B1 B2 B3 B4 by the target conversion matrix to obtain the converted vertex buffer information (B_WP1 B_WP2 B_WP3 B_WP4) corresponding to meshB, and multiplies B1B2B3, B2B3B4 by the target conversion matrix to obtain the converted index buffer information (B_WP1B_WP2B_WP3, B_WP2B_WP3B_WP4) corresponding to meshB. A_WP1 A_WP2 A_WP3 A_WP4, A_WP1A_WP2A_WP3, A_WP2A_WP3A_WP4, B_WP1 B_WP2 B_WP3 B_WP4 and B_WP1B_WP2B_WP3, B_WP2B_WP3B_WP4 are taken as the converted vertex coordinate information.
FIG. 7 is a flowchart illustrating stitching transformed vertex coordinate information to obtain stitched vertex coordinate information, according to an example embodiment. As shown in fig. 7, in an alternative embodiment, in the step S20537, the concatenating the converted vertex coordinate information to obtain the concatenated vertex coordinate information may include:
s205371, splicing the converted vertex buffer area information to obtain spliced vertex buffer area information.
S205373, splicing the converted index buffer information to obtain spliced index buffer information.
S205375, using the spliced vertex buffer information and the spliced index buffer information as spliced vertex coordinate information.
Optionally, steps S205371-S205375 may be performed by the CPU. During splicing, the vertex buffer dimension and the index buffer dimension can each be spliced separately: in the vertex buffer dimension, the converted vertex buffer information corresponding to each of the preset number of candidate mesh objects to be rendered is spliced to obtain the spliced vertex buffer information, and in the index buffer dimension, the converted index buffer information corresponding to each of those candidate mesh objects is spliced to obtain the spliced index buffer information. The spliced vertex buffer information and the spliced index buffer information are then used as the spliced vertex coordinate information. Splicing the vertex coordinate information along the vertex buffer dimension and the index buffer dimension combines coordinate information of different dimensions and improves the efficiency and precision of the splicing, which in turn improves the generation efficiency and precision of the spliced mesh object to be rendered and thus the rendering performance of the mesh objects to be rendered.
For example, the converted vertex buffer information corresponding to meshA is A_WP1 A_WP2 A_WP3 A_WP4 and that corresponding to meshB is B_WP1 B_WP2 B_WP3 B_WP4; the converted index buffer information corresponding to meshA is A_WP1A_WP2A_WP3, A_WP2A_WP3A_WP4 and that corresponding to meshB is B_WP1B_WP2B_WP3, B_WP2B_WP3B_WP4. Then A_WP1 A_WP2 A_WP3 A_WP4 and B_WP1 B_WP2 B_WP3 B_WP4 can be spliced into one large VBO (A_WP1 A_WP2 A_WP3 A_WP4 B_WP1 B_WP2 B_WP3 B_WP4), and A_WP1A_WP2A_WP3, A_WP2A_WP3A_WP4 and B_WP1B_WP2B_WP3, B_WP2B_WP3B_WP4 can be spliced into one large IBO (A_WP1A_WP2A_WP3, A_WP2A_WP3A_WP4, B_WP1B_WP2B_WP3, B_WP2B_WP3B_WP4). Finally, the large VBO and the large IBO are used as the spliced vertex coordinate information.
Optionally, step S2055 may be executed by the CPU. In step S2055, the CPU may generate the spliced mesh object to be rendered based on the spliced large VBO (A_WP1 A_WP2 A_WP3 A_WP4 B_WP1 B_WP2 B_WP3 B_WP4) and the spliced large IBO (A_WP1A_WP2A_WP3, A_WP2A_WP3A_WP4, B_WP1B_WP2B_WP3, B_WP2B_WP3B_WP4).
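A minimal sketch of the buffer splicing in steps S205371-S205375 is given below. It is not Unity code and every name in it is hypothetical; note also that, in a conventional merge, the indices of each later mesh are offset by the number of vertices already appended so that the large IBO keeps pointing at the right vertices.

```python
# Illustrative sketch only, assuming world-space (already converted) vertex and index arrays.
import numpy as np

def splice_buffers(converted_meshes: list[tuple[np.ndarray, np.ndarray]]) -> tuple[np.ndarray, np.ndarray]:
    """converted_meshes: list of (vbo, ibo) pairs, vbo shaped (N, 3), ibo shaped (M,).
    Returns one large spliced VBO and one large spliced IBO."""
    big_vbo, big_ibo, offset = [], [], 0
    for vbo, ibo in converted_meshes:
        big_vbo.append(vbo)
        big_ibo.append(ibo + offset)   # keep triangle indices valid after concatenation
        offset += len(vbo)
    return np.vstack(big_vbo), np.concatenate(big_ibo)

# meshA and meshB from the example: four vertices and two triangles each.
vboA, iboA = np.zeros((4, 3)), np.array([0, 1, 2, 1, 2, 3])
vboB, iboB = np.ones((4, 3)), np.array([0, 1, 2, 1, 2, 3])
big_vbo, big_ibo = splice_buffers([(vboA, iboA), (vboB, iboB)])
print(big_vbo.shape, big_ibo)   # (8, 3) [0 1 2 1 2 3 4 5 6 5 6 7]
```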
FIG. 8 is a schematic diagram illustrating a stitched mesh object to be rendered, according to an example embodiment. As shown in fig. 8, through the above splicing, a preset number of candidate mesh objects to be rendered (i.e. a preset number of "grass" objects) can be spliced into one integral spliced mesh object to be rendered. "Top" in fig. 8 refers to the top of one of the preset number of "grass" objects, and "bottom" in fig. 8 refers to the bottom of one of the preset number of "grass" objects.
S207, generating a first object rendering instruction corresponding to the spliced mesh object to be rendered, second object rendering instructions corresponding to the remaining mesh objects to be rendered, and a third object rendering instruction corresponding to the page element object to be rendered; the remaining mesh objects to be rendered are the mesh objects, among the plurality of mesh objects to be rendered, other than the preset number of candidate mesh objects to be rendered.
S209, taking the first object rendering instruction, the second object rendering instruction and the third object rendering instruction as object rendering instructions.
Optionally, steps S207 to S209 may be executed by the CPU. The CPU may generate the first object rendering instruction for the spliced mesh object to be rendered, generate the second object rendering instructions for the mesh objects other than the preset number of candidate mesh objects to be rendered (i.e. the mesh objects that were not spliced), generate the third object rendering instruction for the page element object to be rendered, and use the first object rendering instruction, the second object rendering instructions and the third object rendering instruction as the finally generated object rendering instructions.
For example, the number of the plurality of mesh objects to be rendered is 5 (mesh 1, mesh2, mesh3, mesh4, mesh 5), and the CPU may determine, from the 5 meshes, that the mesh objects with the same component type, the same material used, and the same map used are mesh1, mesh2, and mesh3, and then the mesh1, the mesh2, and the mesh3 serve as preset number of candidate mesh objects to be rendered whose rendering resources satisfy preset conditions.
The CPU splices mesh1, mesh2 and mesh3 in the manner described above to obtain the spliced mesh object to be rendered.
The CPU then generates a first object rendering instruction corresponding to the spliced mesh object to be rendered, a second object rendering instruction corresponding to mesh4, a second object rendering instruction corresponding to mesh5, and a third object rendering instruction corresponding to the page element object to be rendered, and takes the first object rendering instruction, the second object rendering instructions and the third object rendering instruction as the finally generated object rendering instructions.
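As an illustrative sketch (DrawInstruction and build_draw_calls are hypothetical names, not the patent's or Unity's API), the relationship between the first, second and third object rendering instructions can be pictured as building one instruction for the spliced batch, one per remaining mesh, and one per page element:

```python
# Illustrative sketch only.
from dataclasses import dataclass

@dataclass
class DrawInstruction:
    kind: str      # "mesh" or "page_element"
    target: str    # identifier of what the instruction draws

def build_draw_calls(spliced_mesh: str, remaining_meshes: list[str], page_elements: list[str]) -> list[DrawInstruction]:
    calls = [DrawInstruction("mesh", spliced_mesh)]                       # first object rendering instruction
    calls += [DrawInstruction("mesh", m) for m in remaining_meshes]       # second object rendering instructions
    calls += [DrawInstruction("page_element", e) for e in page_elements]  # third object rendering instruction(s)
    return calls

calls = build_draw_calls("spliced(mesh1+mesh2+mesh3)", ["mesh4", "mesh5"], ["UI text", "UI button"])
print(len(calls))  # 5 instructions instead of the 7 that would be needed without splicing
```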
In the embodiment of the disclosure, the CPU splices the preset number of candidate mesh objects to be rendered whose rendering resources satisfy the preset condition to obtain the spliced mesh object to be rendered, so that the CPU only needs to generate one rendering instruction for that preset number of candidate mesh objects, which reduces the computing resources consumed by the rendering process and lightens the load on the CPU.
Fig. 9 is a flowchart illustrating determining preset position information of each of a plurality of preset objects to be rendered in a preset rendering window according to the above-described attribute information according to an exemplary embodiment. In an alternative embodiment, as shown in fig. 9, the method may further include:
S301, responding to object ordering instructions of a plurality of preset objects to be rendered, and acquiring attribute information corresponding to each of the plurality of preset objects to be rendered; the preset object to be rendered comprises the object to be rendered.
S303, determining preset position information of each of a plurality of preset objects to be rendered in a preset rendering window according to the attribute information.
Accordingly, in the step S101, the obtaining the position information of the object to be rendered in the preset rendering window may include:
acquiring, from the preset position information, the position information of the mesh object to be rendered in the preset rendering window and the position information of the page element object to be rendered in the preset rendering window.
Optionally, in step S301, the terminal may obtain the attribute information corresponding to each of the plurality of preset objects to be rendered in response to an object ordering instruction, triggered by the account corresponding to the terminal, for the plurality of preset objects to be rendered. As one example, the attribute information may include, but is not limited to: object creation time, object creation type, and so on.
Optionally, in the step S303, the terminal may determine preset position information of each of the plurality of preset objects to be rendered in the preset rendering window according to attribute information corresponding to each of the plurality of preset objects to be rendered. As an example, the attribute information is creation time, and then the preset object to be rendered created earliest is located at the top of the preset rendering window, the preset object to be rendered created latest is located at the bottom of the preset rendering window, or the preset object to be rendered created latest is located at the top of the preset rendering window, the preset object to be rendered created earliest is located at the bottom of the preset rendering window, etc. As another example, if the attribute information is an object creation type, a mapping relationship between the object type and the position may be pre-established, and preset position information of the object type in a preset rendering window may be determined according to the mapping relationship.
Optionally, the plurality of preset objects to be rendered may be sorted in the preset rendering window according to the preset position information of each of them in the preset rendering window. For example, if the preset objects to be rendered are preset object 1, preset object 2 and preset object 3, and their preset positions in the preset rendering window are top, middle and bottom respectively, then preset object 1 is ordered at the top of the preset rendering window, preset object 2 in the middle of the preset rendering window, and preset object 3 at the bottom of the preset rendering window.
In one embodiment, the steps S301 to S303 may be performed by a CPU. In this case, in the above step S101, the preset position information may be carried in the rendering instruction sent by the CPU to the canvas renderer, and after the canvas renderer receives the rendering instruction, the position information of the mesh object to be rendered in the preset rendering window and the position information of the page element object to be rendered in the preset rendering window may be obtained from the preset position information.
In another embodiment, steps S301-S303 described above may be performed by a canvas renderer. In this case, in the above step S101, after the canvas renderer receives the rendering instruction sent by the CPU, the preset position information may be locally acquired, and from the preset position information, the position information of the mesh object to be rendered in the preset rendering window and the position information of the page element object to be rendered in the preset rendering window are acquired.
In another embodiment, the above steps S301-S303 may be performed by other modules in the terminal except for the canvas renderer, the CPU. In this case, in the above step S101, after the canvas renderer receives the rendering instruction sent by the CPU, the other modules may send the preset position information to the canvas renderer, and the canvas renderer acquires, from the preset position information, the position information of the mesh object to be rendered in the preset rendering window and the position information of the page element object to be rendered in the preset rendering window.
According to the embodiment of the application, determining the preset position information of the preset objects to be rendered in the preset rendering window according to the attribute information of the preset objects to be rendered ensures that the preset position information is determined reliably and reasonably. Obtaining, from the preset position information, the position information of the mesh object to be rendered in the preset rendering window and the position information of the page element object to be rendered in the preset rendering window improves the accuracy and efficiency with which the position information of the objects to be rendered is determined. It also ensures that the mesh object to be rendered is rendered according to its position in the preset rendering window; because the mesh object to be rendered is placed in the UI renderer for rendering, it can be sorted together with the page element object to be rendered, which reduces memory consumption during object rendering and improves object rendering performance.
S103, determining, according to the position information, the rendering order corresponding to each of the mesh object to be rendered and the page element object to be rendered.
Optionally, step S103 may be performed by the canvas renderer. The canvas renderer can determine the rendering order corresponding to each of the mesh object to be rendered and the page element object to be rendered according to the position information of the mesh object to be rendered and the page element object to be rendered in the preset rendering window.
The embodiment of the present application may determine the rendering order in various manners, which is not specifically limited herein.
Fig. 10 is a flowchart illustrating determining the rendering order of each of the mesh object to be rendered and the page element object to be rendered according to the position information, according to an exemplary embodiment. As shown in fig. 10, in a possible embodiment, determining, according to the position information, the rendering order corresponding to each of the mesh object to be rendered and the page element object to be rendered in step S103 may include:
S1031, acquiring preset position-order mapping information; the preset position-order mapping information is used to represent the mapping relationship between position information and rendering order.
S1033, determining, according to the preset position-order mapping information, the rendering order corresponding to the position information of the mesh object to be rendered and the rendering order corresponding to the position information of the page element object to be rendered.
Optionally, steps S1031-S1033 may be performed by the Canvas Renderer. In step S1031, the canvas renderer may pre-establish the mapping relationship between position information and rendering order to obtain the preset position-order mapping information; for example, the higher an object is ordered in the preset rendering window, the earlier its rendering order, or, conversely, the higher it is ordered, the later its rendering order. In step S1033, the canvas renderer may determine, according to the preset position-order mapping information, the rendering order corresponding to the position information of the mesh object to be rendered and the rendering order corresponding to the position information of the page element object to be rendered. For example, suppose the preset position-order mapping information specifies that the higher an object is ordered in the preset rendering window, the earlier its rendering order; if the position information of the mesh object to be rendered is at the top of the preset rendering window, the mesh object to be rendered is placed first in the rendering order.
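A toy sketch of such a position-to-order mapping is shown below. It is not the Canvas Renderer's internal logic; it simply assumes the rule "the higher an object sits in the preset rendering window, the earlier it is rendered", independent of camera distance.

```python
# Illustrative sketch only; names are hypothetical.
def rendering_order(objects_top_to_bottom: list[str]) -> list[tuple[int, str]]:
    """The index of an object in the preset rendering window becomes its draw order."""
    return list(enumerate(objects_top_to_bottom))

window = ["mesh object to be rendered", "UI text", "UI button"]  # top -> bottom in the window
print(rendering_order(window))
# [(0, 'mesh object to be rendered'), (1, 'UI text'), (2, 'UI button')]
```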
By determining, according to the preset position-order mapping information, the rendering order corresponding to the position information of the mesh object to be rendered and the rendering order corresponding to the position information of the page element object to be rendered, the embodiment of the application improves the accuracy with which the rendering order is determined and ensures that the mesh object to be rendered is rendered according to its position in the preset rendering window. Because the mesh object to be rendered is placed in the UI renderer for rendering, it can be sorted together with the page element object to be rendered, which reduces memory consumption during object rendering and improves object rendering performance.
S105, according to the rendering sequence, rendering the network object to be rendered and the page element object to be rendered in the target page to obtain the rendered target page element object.
Optionally, the object rendering instruction may carry rendering resources corresponding to the mesh object to be rendered and the page element object to be rendered, and the canvas renderer may render the network object to be rendered in the target page based on the rendering resources corresponding to the network object to be rendered according to the rendering sequence, and render the page element object to be rendered in the target page based on the rendering resources corresponding to the page element object to be rendered, so as to obtain the rendered target page element object.
For example, suppose the network objects to be rendered are network object 1, network object 2 and network object 3, the page element objects to be rendered are a UI text and a UI button, and the determined rendering order is: network object 1, network object 2, network object 3, UI text, UI button. The canvas renderer then renders network object 1 first, then network object 2, then network object 3, then the UI text, and finally the UI button.
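A minimal, purely illustrative Python loop (with a hypothetical draw callable and dictionary keys) showing a renderer consuming the determined rendering order and the per-object rendering resources:

```python
def render_in_order(ordered_objects, draw):
    """Draw each object to be rendered in the previously determined rendering order,
    using the rendering resource carried by the object rendering instruction."""
    for obj in ordered_objects:
        draw(obj["name"], obj["resource"])

render_in_order(
    [{"name": "network object 1", "resource": "res1"},
     {"name": "network object 2", "resource": "res2"},
     {"name": "network object 3", "resource": "res3"},
     {"name": "UI text", "resource": "font"},
     {"name": "UI button", "resource": "sprite"}],
    draw=lambda name, res: print("render", name, "with", res),
)
```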
In the embodiment of the disclosure, the rendering is performed by the canvas renderer without additionally establishing a render texture object (Render Texture), so that memory consumption in the rendering process is reduced.
Fig. 11 is a flowchart illustrating rendering the network object to be rendered and the page element object to be rendered in the target page according to the above rendering order to obtain the rendered target page element object, according to an exemplary embodiment. As shown in fig. 11, in a possible embodiment, in the step S105, rendering the network object to be rendered and the page element object to be rendered in the target page according to the rendering order to obtain the rendered target page element object may include:
S1051, acquiring a preset rendering area in the target page.
S1053, projecting pixels of the network object to be rendered into the target page to obtain projected first pixels, and projecting pixels of the page element object to be rendered into the target page to obtain projected second pixels.
S1055, taking a first pixel in a preset rendering area as a first target pixel, and taking a second pixel in the preset rendering area as a second target pixel.
S1057, rendering the first target pixel and the second target pixel in the target page according to the rendering sequence to obtain the target page element object.
Alternatively, the steps S1051-S1057 described above may be performed by the canvas renderer (Canvas Renderer).
Alternatively, a preset rendering area may be set in advance in the target page. As an example, the preset rendering area may be defined by a stencil buffer (Stencil Buffer), and the pixels located in the preset rendering area have a stencil value of 1.
Optionally, in the steps S1051-S1053, the canvas renderer may acquire the preset rendering area, project the pixels of the network object to be rendered into the target page to obtain the projected first pixels, and project the pixels of the page element object to be rendered into the target page to obtain the projected second pixels. If a projected pixel is located in the preset rendering area, its stencil value is 1 and the pixel is rendered; if a projected pixel is not located in the preset rendering area, its stencil value is not 1 and the pixel is masked, that is, it is not rendered.
Alternatively, in the step S1055, the canvas renderer may take a first pixel located in the preset rendering area, i.e., a first pixel whose stencil value is 1, as the first target pixel, and take a second pixel located in the preset rendering area, i.e., a second pixel whose stencil value is 1, as the second target pixel.
Optionally, in the step S1057, the canvas renderer may render the first target pixels and the second target pixels in sequence according to the rendering order corresponding to the network object to be rendered and the page element object to be rendered, so as to obtain the target page element object. For example, suppose the network objects to be rendered are network object 1, network object 2 and network object 3, the page element objects to be rendered are a UI text and a UI button, and the determined rendering order is: network object 1, network object 2, network object 3, UI text, UI button. The canvas renderer then first renders the first target pixels corresponding to network object 1, then the first target pixels corresponding to network object 2, then the first target pixels corresponding to network object 3, then the second target pixels corresponding to the UI text, and finally the second target pixels corresponding to the UI button.
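A minimal Python sketch of this masking behaviour, assuming a rectangular preset rendering area and hypothetical helper names (in_preset_area, render_masked); projected pixels outside the area keep a stencil value other than 1 and are skipped:

```python
from typing import Iterable, List, Tuple

Pixel = Tuple[int, int]           # (x, y) position of a projected pixel in the target page
Area = Tuple[int, int, int, int]  # (x0, y0, x1, y1) preset rendering area

def in_preset_area(p: Pixel, area: Area) -> bool:
    """Pixels inside the preset rendering area are given a stencil value of 1."""
    x0, y0, x1, y1 = area
    return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

def render_masked(projected: Iterable[Pixel], area: Area) -> List[Pixel]:
    rendered = []
    for p in projected:
        stencil = 1 if in_preset_area(p, area) else 0
        if stencil == 1:   # stencil test passes: the pixel is drawn
            rendered.append(p)
        # stencil != 1: the pixel is masked, i.e. not rendered
    return rendered

# First pixels (network object) and second pixels (page element object) are filtered the same way.
print(render_masked([(10, 10), (500, 500)], area=(0, 0, 100, 100)))  # [(10, 10)]
```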
In the embodiment of the application, a preset rendering area is set in advance, the pixels of the network object to be rendered and the pixels of the page element object to be rendered are projected into the target page, and the pixels that are not located in the preset rendering area are not rendered. In this way, the network object to be rendered can be shielded by the UI, that is, the network object to be rendered has the shielding property of a page element, which improves the rendering performance of the object to be rendered and the user experience.
Fig. 12 is a flowchart illustrating processing of an operation triggered based on a target page, according to an exemplary embodiment. In an alternative embodiment, as shown in fig. 12, after the network object to be rendered and the page element object to be rendered are rendered in the target page according to the rendering order, the method may further include:
S301, responding to a target operation triggered based on the target page, and acquiring an operation position of the target operation in the target page.

S303, transmitting a target ray in the target page by taking the operation position as a starting point.

S305, acquiring, from the target page element object, a candidate page element object touched by the target ray, so that the candidate page element object processes the target operation.
Alternatively, the above steps S301 to S305 may be performed by the terminal.
Optionally, in the step S301, when the account corresponding to the terminal clicks the target page, a target operation may be triggered, and the terminal responds to the target operation to acquire the operation position of the target operation in the target page. Optionally, in the step S303, the terminal emits the target ray into the target page with the operation position as a starting point. Optionally, in the step S305, since there may be a plurality of target page element objects, the target ray may touch a plurality of page element objects; the page element object touched by the target ray first may be taken as the candidate page element object, and the candidate page element object is used to process the target operation.
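A purely illustrative Python sketch of this picking step, assuming each target page element object exposes an axis-aligned rectangular footprint and that the target ray travels along the page's depth axis from the operation position, so the object touched first is the hit object with the smallest depth; the names PageElementObject and pick_candidate are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class PageElementObject:
    name: str
    rect: Tuple[float, float, float, float]  # (x0, y0, x1, y1) footprint in the target page
    depth: float                             # smaller = touched earlier along the target ray

def pick_candidate(op_pos: Tuple[float, float], objects: List[PageElementObject]) -> Optional[PageElementObject]:
    """Cast a ray from the operation position into the page and return the first object it touches."""
    x, y = op_pos
    hits = [o for o in objects if o.rect[0] <= x <= o.rect[2] and o.rect[1] <= y <= o.rect[3]]
    return min(hits, key=lambda o: o.depth) if hits else None

candidate = pick_candidate((50, 50), [
    PageElementObject("UI button", (0, 0, 100, 100), depth=2.0),
    PageElementObject("network object 1", (0, 0, 80, 80), depth=1.0),
])
print(candidate.name if candidate else None)  # this candidate page element object handles the target operation
```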
According to the embodiment of the application, the target ray is emitted in response to the target operation triggered based on the target page, and the target operation is processed by the candidate page element object touched by the target ray, so that the network object to be rendered can be operated (for example, clicked) as a UI element, that is, the network object to be rendered has the click property of a page element, which improves the rendering performance of the object to be rendered and the user experience.
Fig. 13 is a schematic diagram of a model corresponding to a network object to be rendered according to an exemplary embodiment. By rendering the network object to be rendered of fig. 13 into a target page with the scheme of the embodiment of the application, a schematic diagram of the corresponding target page element object can be obtained. Fig. 14 is a schematic diagram of the corresponding target page element object obtained by rendering the network object to be rendered of fig. 13 into the target page, according to an exemplary embodiment; as shown in fig. 14, the network object to be rendered has become a page element object and has the properties of a page element (ordering, shielding, clicking, etc.). Fig. 15 is a schematic diagram showing the performance of the network object to be rendered of fig. 13 in a game scene, according to an exemplary embodiment; as shown in fig. 15, the scheme of the embodiment of the application can improve the rendering performance of the object to be rendered.
Fig. 16 is a block diagram of an object rendering apparatus according to an exemplary embodiment. As shown in fig. 16, the apparatus may include at least:
a position information obtaining module 401, configured to obtain, in response to an object rendering instruction of an object to be rendered, position information of the object to be rendered in a preset rendering window; the object to be rendered comprises a network object to be rendered and a page element object to be rendered corresponding to the target page.
A rendering order determining module 403, configured to determine, according to the location information, a rendering order corresponding to each of the network object to be rendered and the page element object to be rendered.
And a rendering module 405, configured to render the network object to be rendered and the page element object to be rendered in the target page according to the rendering order, so as to obtain a rendered target page element object.
In an alternative embodiment, the rendering order determining module may include:
a preset position sequence mapping information acquisition unit for acquiring preset position sequence mapping information; the preset position sequence mapping information is used for representing the mapping relation between the position information and the rendering order.
And the rendering sequence determining unit is used for determining the rendering sequence corresponding to the position information of the network object to be rendered and the rendering sequence corresponding to the position information of the page element object to be rendered according to the preset position sequence mapping information.
In an alternative embodiment, the rendering module includes:
the preset rendering area acquisition unit is used for acquiring the preset rendering area in the target page.
The projection unit is used for projecting the pixels of the network object to be rendered into the target page to obtain projected first pixels, and projecting the pixels of the page element object to be rendered into the target page to obtain projected second pixels.
And the pixel determining unit is used for taking the first pixel positioned in the preset rendering area as a first target pixel and taking the second pixel positioned in the preset rendering area as a second target pixel.
And the rendering unit is used for rendering the first target pixel and the second target pixel in the target page according to the rendering sequence to obtain the target page element object.
In an alternative embodiment, the apparatus may further include:
and the operation position acquisition module is used for responding to the target operation triggered on the basis of the target page and acquiring the operation position of the target operation in the target page.
And the transmitting module is used for transmitting the target rays in the target page by taking the operation position as a starting point.
And the processing module is used for acquiring a candidate page element object touching the target ray from the target page element object so as to enable the candidate page element object to process the target operation.
In an alternative embodiment, the apparatus further comprises:
the attribute information acquisition module is used for responding to object ordering instructions of a plurality of preset objects to be rendered and acquiring attribute information corresponding to each of the plurality of preset objects to be rendered; the preset object to be rendered includes the object to be rendered.
And the preset position information determining module is used for determining preset position information of each of the plurality of preset objects to be rendered in the preset rendering window according to the attribute information.
The above-mentioned position information acquisition module includes:
the position information acquisition unit is used for acquiring, from the preset position information, the position information of the network object to be rendered in the preset rendering window and the position information of the page element object to be rendered in the preset rendering window.
In an optional embodiment, the number of the network objects to be rendered is a plurality, and the apparatus further includes:
the object and resource acquisition module is used for acquiring a plurality of network objects to be rendered and rendering resources corresponding to the plurality of network objects to be rendered respectively.
The candidate mesh object obtaining module is used for determining a preset number of candidate mesh objects to be rendered, the rendering resources of which meet preset conditions, from the plurality of mesh objects to be rendered based on the rendering resources corresponding to the plurality of mesh objects to be rendered.
The spliced grid object to be rendered obtaining module is used for splicing the preset number of candidate grid objects to be rendered to obtain spliced grid objects to be rendered.
The instruction generation module is used for generating a first object rendering instruction corresponding to the spliced grid object to be rendered, a second object rendering instruction corresponding to the rest grid objects to be rendered and a third object rendering instruction corresponding to the page element objects to be rendered; the remaining mesh objects to be rendered are mesh objects except the preset number of candidate mesh objects to be rendered among the plurality of mesh objects to be rendered.
The instruction determining module is configured to take the first object rendering instruction, the second object rendering instruction, and the third object rendering instruction as the object rendering instructions.
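The preset condition on rendering resources is not fixed by the embodiments; the following non-limiting Python sketch assumes one common choice — candidate mesh objects sharing the same rendering resource (for example, the same material) — and the names split_render_instructions, resource and preset_count are hypothetical:

```python
from collections import defaultdict

def split_render_instructions(mesh_objects, page_element_objects, preset_count):
    """Group mesh objects by their rendering resource and pick one group of at least
    preset_count objects as splicing candidates (assumed preset condition: shared resource)."""
    groups = defaultdict(list)
    for obj in mesh_objects:
        groups[obj["resource"]].append(obj)
    candidates = next((g[:preset_count] for g in groups.values() if len(g) >= preset_count), [])
    remaining = [o for o in mesh_objects if o not in candidates]
    return (
        {"type": "first",  "objects": candidates},           # instruction for the spliced mesh object
        {"type": "second", "objects": remaining},            # instruction for the remaining mesh objects
        {"type": "third",  "objects": page_element_objects}  # instruction for the page element objects
    )

first, second, third = split_render_instructions(
    [{"name": "m1", "resource": "matA"}, {"name": "m2", "resource": "matA"}, {"name": "m3", "resource": "matB"}],
    [{"name": "UI text"}, {"name": "UI button"}],
    preset_count=2,
)
print(len(first["objects"]), len(second["objects"]), len(third["objects"]))  # 2 1 2
```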
In an optional embodiment, the spliced mesh object to be rendered obtaining module includes:
And the vertex coordinate information acquisition sub-module is used for acquiring the vertex coordinate information corresponding to each of the preset number of candidate grid objects to be rendered.
And the vertex coordinate information splicing sub-module is used for splicing the vertex coordinate information to obtain spliced vertex coordinate information.
And the spliced object sub-module is used for generating the spliced grid object to be rendered according to the spliced vertex coordinate information.
In an alternative embodiment, the vertex coordinate information stitching sub-module includes:
and the conversion unit is used for determining a space conversion operation required for converting the vertex coordinate information corresponding to each of the preset number of candidate grid objects to be rendered from the space in which each of the preset number of candidate grid objects to be rendered is located to the world coordinate space.
The target transformation matrix determining unit is used for determining the target transformation matrix corresponding to each of the preset number of candidate grid objects to be rendered based on the preset transformation matrix and the space transformation operation.
And the product calculation unit is used for calculating the product of the vertex coordinate information and the target conversion matrix to obtain converted vertex coordinate information corresponding to each of the preset number of candidate grid objects to be rendered.
And the coordinate splicing unit is used for splicing the converted vertex coordinate information to obtain the spliced vertex coordinate information.
In an optional embodiment, the vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered includes vertex buffer information and index buffer information, and the product calculating unit includes:
and the vertex buffer conversion subunit is used for calculating the product of the vertex buffer information corresponding to each of the preset number of candidate grid objects to be rendered and the target conversion matrix to obtain converted vertex buffer information corresponding to each of the preset number of candidate grid objects to be rendered.
And the index buffer conversion subunit is used for calculating the product of the index buffer information corresponding to each of the preset number of candidate grid objects to be rendered and the target conversion matrix to obtain converted index buffer information corresponding to each of the preset number of candidate grid objects to be rendered.
And a vertex position conversion subunit, configured to use the converted vertex buffer information and the converted index buffer information as the converted vertex coordinate information.
In an alternative embodiment, the coordinate stitching unit includes:
and the vertex buffer splicing subunit is used for splicing the converted vertex buffer information to obtain spliced vertex buffer information.
And the index buffer splicing subunit is used for splicing the converted index buffer information to obtain spliced index buffer information.
And the coordinate splicing subunit is used for generating the spliced vertex coordinate information according to the spliced vertex buffer information and the spliced index buffer information.
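As a non-limiting Python sketch of the splicing described by these sub-modules: vertex positions are brought into the world coordinate space with each candidate object's local-to-world matrix (the target transformation matrix), and the vertex and index buffers are then concatenated. Offsetting the indices by the running vertex count, rather than transforming them, is an assumption made here so that the example is self-consistent; all names (splice_meshes, to_world, etc.) are hypothetical:

```python
def transform_point(matrix, p):
    """Apply a 4x4 row-major local-to-world matrix to a 3D point (homogeneous w assumed 1)."""
    x, y, z = p
    return tuple(
        matrix[r][0] * x + matrix[r][1] * y + matrix[r][2] * z + matrix[r][3]
        for r in range(3)
    )

def splice_meshes(meshes):
    """meshes: list of dicts with 'vertices' (list of (x, y, z)), 'indices' (list of int)
    and 'to_world' (4x4 matrix). Returns the spliced vertex buffer and index buffer."""
    spliced_vertices, spliced_indices = [], []
    for mesh in meshes:
        base = len(spliced_vertices)  # offset applied to this mesh's indices
        spliced_vertices.extend(transform_point(mesh["to_world"], v) for v in mesh["vertices"])
        spliced_indices.extend(i + base for i in mesh["indices"])
    return spliced_vertices, spliced_indices

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
shift_x = [[1, 0, 0, 5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
tri = {"vertices": [(0, 0, 0), (1, 0, 0), (0, 1, 0)], "indices": [0, 1, 2]}
verts, idx = splice_meshes([dict(tri, to_world=identity), dict(tri, to_world=shift_x)])
print(len(verts), idx)  # 6 [0, 1, 2, 3, 4, 5]
```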
It can be understood that the specific embodiments of the present application involve user-related data such as user information. When the above embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use and processing of the related data need to comply with the relevant laws, regulations and standards of the relevant countries and regions.
It should be noted that, the device embodiment provided by the embodiment of the present application and the method embodiment described above are based on the same inventive concept.
The embodiment of the application also provides an electronic device for rendering an object, which comprises a processor and a memory, wherein at least one instruction or at least one section of program is stored in the memory, and the at least one instruction or the at least one section of program is loaded and executed by the processor to realize the object rendering method provided by any embodiment.
Embodiments of the present application also provide a computer readable storage medium that may be provided in a terminal to store at least one instruction or at least one program related to implementing an object rendering method in a method embodiment, where the at least one instruction or the at least one program is loaded and executed by a processor to implement the object rendering method as provided in the method embodiment above.
Alternatively, in this embodiment of the present specification, the storage medium may be located in at least one network server among a plurality of network servers of a computer network. Alternatively, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The memory of the embodiments of the present specification may be used to store software programs and modules, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for functions, and the like; the data storage area may store data created according to the use of the device, and the like. In addition, the memory may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory may further include a memory controller to provide the processor with access to the memory.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the object rendering method provided by the above-mentioned method embodiment.
The object rendering method provided by the embodiment of the application may be executed in a terminal, a computer terminal, a server, or a similar computing device. Taking running on a server as an example, fig. 17 is a block diagram of the hardware structure of a server for object rendering, according to an exemplary embodiment. As shown in fig. 17, the server 500 may vary considerably in configuration or performance, and may include one or more central processing units (Central Processing Units, CPU) 510 (the central processing unit 510 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 530 for storing data, and one or more storage media 520 (e.g., one or more mass storage devices) storing application programs 523 or data 522. The memory 530 and the storage media 520 may be transitory or persistent storage. The programs stored on the storage media 520 may include one or more modules, and each module may include a series of instruction operations on the server. Furthermore, the central processing unit 510 may be configured to communicate with the storage media 520 and execute, on the server 500, the series of instruction operations in the storage media 520. The server 500 may further include one or more power supplies 560, one or more wired or wireless network interfaces 550, one or more input/output interfaces 540, and/or one or more operating systems 521, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
Input-output interface 540 may be used to receive or transmit data via a network. The specific example of the network described above may include a wireless network provided by a communication provider of the server 500. In one example, the input/output interface 540 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the input/output interface 540 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 17 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the server 500 may also include more or fewer components than shown in fig. 17, or have a different configuration than shown in fig. 17.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the device and server embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and references to the parts of the description of the method embodiments are only required.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing is only illustrative of the present application and is not to be construed as limiting thereof, but rather as various modifications, equivalent arrangements, improvements, etc., within the spirit and principles of the present application.

Claims (14)

1. An object rendering method, the method comprising:
responding to an object rendering instruction of an object to be rendered, and acquiring position information of the object to be rendered in a preset rendering window; the object to be rendered comprises a network object to be rendered and a page element object to be rendered corresponding to the target page;
Determining rendering sequences corresponding to the network object to be rendered and the page element object to be rendered respectively according to the position information;
and according to the rendering sequence, rendering the network object to be rendered and the page element object to be rendered in the target page to obtain a rendered target page element object.
2. The object rendering method according to claim 1, wherein the determining, according to the position information, a rendering order of each of the network object to be rendered and the page element object to be rendered includes:
acquiring preset position sequence mapping information; the preset position sequence mapping information is used for representing the mapping relation between the position information and the rendering sequence;
and determining a rendering sequence corresponding to the position information of the network object to be rendered and a rendering sequence corresponding to the position information of the page element object to be rendered according to the preset position sequence mapping information.
3. The method for rendering the object according to claim 1, wherein the rendering the network object to be rendered and the page element object to be rendered in the target page according to the rendering order, to obtain a rendered target page element object, includes:
Acquiring a preset rendering area in the target page;
projecting the pixels of the network object to be rendered into the target page to obtain first projected pixels, and projecting the pixels of the page element object to be rendered into the target page to obtain second projected pixels;
taking a first pixel positioned in the preset rendering area as a first target pixel, and taking a second pixel positioned in the preset rendering area as a second target pixel;
and rendering the first target pixel and the second target pixel in the target page according to the rendering sequence to obtain the target page element object.
4. The object rendering method according to claim 1, wherein after the rendering the network object to be rendered and the page element object to be rendered in the target page according to the rendering order, the method further comprises:
responding to target operation triggered on the basis of the target page, and acquiring an operation position of the target operation in the target page;
transmitting a target ray in the target page by taking the operation position as a starting point;
And acquiring a candidate page element object touched with a target ray from the target page element object so that the candidate page element object processes the target operation.
5. The object rendering method according to any one of claims 1 to 4, characterized in that the method further comprises:
responding to object ordering instructions of a plurality of preset objects to be rendered, and acquiring attribute information corresponding to each of the plurality of preset objects to be rendered; the preset object to be rendered comprises the object to be rendered;
determining preset position information of each of the plurality of preset objects to be rendered in the preset rendering window according to the attribute information;
the obtaining the position information of the object to be rendered in the preset rendering window includes:
and acquiring, from the preset position information, the position information of the network object to be rendered in the preset rendering window and the position information of the page element object to be rendered in the preset rendering window.
6. The object rendering method according to any one of claims 1 to 4, wherein the number of the network objects to be rendered is plural, the method further comprising:
Acquiring a plurality of network objects to be rendered and rendering resources corresponding to the plurality of network objects to be rendered respectively;
determining a preset number of candidate mesh objects to be rendered, of which the rendering resources meet preset conditions, from the plurality of mesh objects to be rendered based on rendering resources corresponding to the plurality of mesh objects to be rendered respectively;
splicing the preset number of candidate grid objects to be rendered to obtain spliced grid objects to be rendered;
generating a first object rendering instruction corresponding to the spliced grid object to be rendered, a second object rendering instruction corresponding to the rest grid objects to be rendered and a third object rendering instruction corresponding to the page element objects to be rendered; the remaining mesh objects to be rendered are mesh objects except the preset number of candidate mesh objects to be rendered in the plurality of mesh objects to be rendered;
and taking the first object rendering instruction, the second object rendering instruction and the third object rendering instruction as the object rendering instructions.
7. The method for rendering objects according to claim 6, wherein the stitching the predetermined number of candidate mesh objects to be rendered to obtain the stitched mesh objects to be rendered includes:
Obtaining vertex coordinate information corresponding to each of the preset number of candidate grid objects to be rendered;
splicing the vertex coordinate information to obtain spliced vertex coordinate information;
and generating the spliced grid object to be rendered according to the spliced vertex coordinate information.
8. The method for rendering an object according to claim 7, wherein the stitching the vertex coordinate information to obtain stitched vertex coordinate information includes:
determining a space conversion operation required for converting the vertex coordinate information corresponding to each of the preset number of candidate grid objects to be rendered from the space in which each of the preset number of candidate grid objects to be rendered is located to the world coordinate space;
determining target transformation matrixes corresponding to the preset number of candidate grid objects to be rendered respectively based on a preset transformation matrix and the space transformation operation;
calculating the product of the vertex coordinate information and the target transformation matrix to obtain transformed vertex coordinate information corresponding to each of the preset number of candidate grid objects to be rendered;
and splicing the converted vertex coordinate information to obtain the spliced vertex coordinate information.
9. The method for rendering objects according to claim 8, wherein the vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered includes vertex buffer information and index buffer information, and the calculating the product of the vertex coordinate information and the target transformation matrix to obtain transformed vertex coordinate information corresponding to each of the preset number of candidate mesh objects to be rendered includes:
calculating products of vertex buffer zone information corresponding to each of the preset number of candidate grid objects to be rendered and the target conversion matrix to obtain converted vertex buffer zone information corresponding to each of the preset number of candidate grid objects to be rendered;
calculating the product of index buffer zone information corresponding to each of the preset number of candidate grid objects to be rendered and the target conversion matrix to obtain converted index buffer zone information corresponding to each of the preset number of candidate grid objects to be rendered;
and taking the converted vertex buffer information and the converted index buffer information as the converted vertex coordinate information.
10. The method of claim 9, wherein the concatenating the transformed vertex coordinate information to obtain the concatenated vertex coordinate information comprises:
Splicing the converted vertex buffer zone information to obtain spliced vertex buffer zone information;
splicing the converted index buffer information to obtain spliced index buffer information;
and generating the spliced vertex coordinate information according to the spliced vertex buffer information and the spliced index buffer information.
11. An object rendering apparatus, the apparatus comprising:
the position information acquisition module is used for responding to an object rendering instruction of an object to be rendered and acquiring the position information of the object to be rendered in a preset rendering window; the object to be rendered comprises a network object to be rendered and a page element object to be rendered corresponding to the target page;
the rendering sequence determining module is used for determining the rendering sequence corresponding to each of the network object to be rendered and the page element object to be rendered according to the position information;
and the rendering module is used for rendering the network object to be rendered and the page element object to be rendered in the target page according to the rendering sequence to obtain the rendered target page element object.
12. An electronic device for object rendering, characterized in that the electronic device comprises a processor and a memory, in which at least one instruction or at least one program is stored, which is loaded and executed by the processor to implement the object rendering method according to any of claims 1 to 10.
13. A computer readable storage medium having stored therein at least one instruction or at least one program loaded and executed by a processor to implement the object rendering method of any one of claims 1 to 10.
14. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the object rendering method of any one of claims 1 to 10.
CN202210394558.7A 2022-04-14 2022-04-14 Object rendering method and device, electronic equipment and storage medium Pending CN116958389A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210394558.7A CN116958389A (en) 2022-04-14 2022-04-14 Object rendering method and device, electronic equipment and storage medium
PCT/CN2023/074536 WO2023197729A1 (en) 2022-04-14 2023-02-06 Object rendering method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210394558.7A CN116958389A (en) 2022-04-14 2022-04-14 Object rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116958389A true CN116958389A (en) 2023-10-27

Family

ID=88328786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210394558.7A Pending CN116958389A (en) 2022-04-14 2022-04-14 Object rendering method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN116958389A (en)
WO (1) WO2023197729A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201516880D0 (en) * 2015-09-23 2015-11-04 Pixelrights Ltd Secure distribution of an image
CN108230436A (en) * 2017-12-11 2018-06-29 网易(杭州)网络有限公司 The rendering intent of virtual resource object in three-dimensional scenic
CN107918949A (en) * 2017-12-11 2018-04-17 网易(杭州)网络有限公司 Rendering intent, storage medium, processor and the terminal of virtual resource object
CN108196835A (en) * 2018-01-29 2018-06-22 东北大学 Pel storage and the method rendered in a kind of game engine
CN111476870B (en) * 2020-02-29 2022-08-30 新华三大数据技术有限公司 Object rendering method and device
CN112287258A (en) * 2020-09-25 2021-01-29 长沙市到家悠享网络科技有限公司 Page rendering method, device, equipment and storage medium

Also Published As

Publication number Publication date
WO2023197729A1 (en) 2023-10-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination