CN117011492A - Image rendering method and device, electronic equipment and storage medium


Info

Publication number
CN117011492A
Authority
CN
China
Prior art keywords
rendering
grid
vertex
target
style information
Prior art date
Legal status
Granted
Application number
CN202311200361.6A
Other languages
Chinese (zh)
Other versions
CN117011492B (en)
Inventor
任亚飞
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202311200361.6A
Publication of CN117011492A
Application granted
Publication of CN117011492B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205: Re-meshing
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G06T17/05: Geographic models
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the application discloses an image rendering method and device, an electronic device and a storage medium. Vertex identifications corresponding to rendering style information are generated and added both to the rendering style information and to the vertex attributes of the associated grids to be rendered, so that the vertices corresponding to each piece of rendering style information are marked by the vertex identifications. A plurality of grids to be rendered are merged to obtain at least one target grid, and the target grid and each piece of rendering style information are then sent to a graphics processor, which screens out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications and renders them. On the basis of preserving independent rendering, the number of grids to be rendered is effectively reduced, the frequency of interaction with the graphics processor is reduced, image rendering stutter is mitigated, and the smoothness of image rendering is improved. The method can be widely applied to scenes such as maps, navigation, intelligent traffic, assisted driving, video production and virtual reality.

Description

Image rendering method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image rendering method, an image rendering device, an electronic device, and a storage medium.
Background
With the development of computer technology, image rendering has become a key technology in many fields. For example, in a map scene, the map interface is typically rendered for display based on grid data. In the related art, when rendering images at higher precision, more elements need to be rendered and the number of grids grows correspondingly; as the business develops, the number of grids multiplies, causing image rendering to stutter and reducing its smoothness.
Disclosure of Invention
The following is a summary of the subject matter of the detailed description of the application. This summary is not intended to limit the scope of the claims.
The embodiment of the application provides an image rendering method, an image rendering device, electronic equipment and a storage medium, which can improve the fluency of image rendering.
In one aspect, an embodiment of the present application provides an image rendering method, including:
acquiring a plurality of grids to be rendered and rendering style information associated with each grid to be rendered;
generating vertex identifications corresponding to the rendering style information, adding the vertex identifications to the rendering style information and adding the vertex identifications to the vertex attributes of the associated grids to be rendered;
merging the grids to be rendered to obtain at least one target grid, and configuring the target grid as a grid commonly referenced by the corresponding rendering style information;
and sending the target grid and each piece of rendering style information to a graphics processor, so that the graphics processor screens out target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications for rendering, wherein the target pixel fragments are obtained by converting target vertices in the target grid, and the target vertices and the rendering style information have the same vertex identifications.
In another aspect, an embodiment of the present application provides an image rendering method, including:
obtaining a target grid and a plurality of pieces of rendering style information, wherein the target grid is obtained by merging the grids to be rendered associated with each piece of rendering style information, the vertex attributes of the grids to be rendered and the rendering style information both include vertex identifications, and the target grid is configured as a grid commonly referenced by the rendering style information;
and screening out target pixel fragments corresponding to the rendering style information according to the vertex identifications, and rendering, wherein the target pixel fragments are obtained by converting target vertices in the target grid, and the target vertices and the rendering style information have the same vertex identifications.
On the other hand, the embodiment of the application also provides an image rendering device, which comprises:
an information acquisition module, configured to acquire a plurality of grids to be rendered and rendering style information associated with each grid to be rendered;
a vertex identification acquisition module, configured to generate vertex identifications corresponding to the rendering style information, add the vertex identifications to the rendering style information, and add the vertex identifications to the vertex attributes of the associated grids to be rendered;
a grid merging module, configured to merge the plurality of grids to be rendered to obtain at least one target grid, and configure the target grid as a grid commonly referenced by the corresponding rendering style information;
a rendering information sending module, configured to send the target grid and each piece of rendering style information to a graphics processor, so that the graphics processor screens out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications for rendering, wherein the target pixel fragments are obtained by converting target vertices in the target grid, and the target vertices and the rendering style information have the same vertex identifications.
Further, the mesh merging module is specifically configured to:
generating a first grid identification of the target grid, wherein the first grid identification is used for marking different target grids;
and taking each target grid as a rendering batch; for each piece of rendering style information corresponding to the target grid in the current rendering batch, reading the target grid from the grid cache area according to the first grid identification, and configuring the read target grid as a grid commonly referenced by the rendering style information.
Further, the rendering style information is configured as a style object, and the mesh merging module is further configured to:
merging a plurality of style objects corresponding to the same target grid into a style object list;
and constructing batch objects of the rendering batch where the target grid is positioned according to the first grid identification, the target grid and the style object list.
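As a rough illustration only, the batch object described above can be pictured as the following Unity C# sketch; the type and field names (StyleObject, BatchObject, firstGridId and so on) are assumptions for illustration, not taken from the patent.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical style object: one piece of rendering style information,
// carrying its vertex identification and the material built from it.
public class StyleObject
{
    public string vertexId;
    public Material material;
}

// Hypothetical batch object: one rendering batch groups the first grid
// identification, the merged target grid, and the style object list that
// commonly references that grid.
public class BatchObject
{
    public string firstGridId;
    public Mesh targetGrid;
    public List<StyleObject> styleList = new List<StyleObject>();
}
```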
Further, the above mesh merging module is further configured to:
traversing a style object list in the batch objects to obtain a current style object in the style object list;
creating a scene node object for the current style object, creating a material object according to the style object, binding the material object to the scene node object, and adding the vertex identification in the style object to the material object;
reading the target grid from the grid cache area according to the first grid identification, and configuring the read target grid as a grid commonly referenced by the scene node object.
Further, the target grid is obtained by combining a plurality of grids to be rendered with the same grid type, the grid type is used for indicating the type of an element modeled by the grids to be rendered, and the grid combining module is further used for:
acquiring a grid type of a grid to be rendered and a style type of rendering style information;
and splicing the grid type and the style type to obtain a first grid identification of the target grid.
Further, the above mesh merging module is further configured to:
and when the target grid cannot be read from the grid cache area according to the first grid identification, writing the first grid identification and the target grid into the grid cache area in association with each other.
Further, the vertex identification obtaining module is further configured to:
when the same grid to be rendered is respectively associated with different rendering style information, acquiring a second grid identifier of the current grid to be rendered, and acquiring vertex identifiers corresponding to the associated rendering style information according to the second grid identifier, wherein the second grid identifier is used for marking the different grids to be rendered;
when the same rendering style information is respectively associated with different grids to be rendered, style identifiers of the rendering style information are obtained, the style identifiers are used as vertex identifiers corresponding to the rendering style information, and the style identifiers are used for marking different rendering style information;
when different grids to be rendered are respectively associated with different rendering style information, a second grid identifier is obtained, and a vertex identifier corresponding to the associated rendering style information is obtained according to the second grid identifier, or a style identifier is obtained, and the style identifier is used as the vertex identifier corresponding to the rendering style information.
Further, the mesh to be rendered is divided into a plurality of sub-meshes, each sub-mesh is associated with different rendering style information, and the vertex identification acquisition module is further configured to:
acquiring a sub-grid identification for marking the sub-grid;
and splicing the second grid mark and the sub-grid mark to obtain vertex marks corresponding to the associated rendering style information.
On the other hand, the embodiment of the application also provides an image rendering device, which comprises:
a rendering information receiving module, configured to acquire a target grid and a plurality of pieces of rendering style information, wherein the target grid is obtained by merging the grids to be rendered associated with each piece of rendering style information, the vertex attributes of the grids to be rendered and the rendering style information both include vertex identifiers, and the target grid is configured as a grid commonly referenced by the rendering style information;
a rendering module, configured to screen out the target pixel fragments corresponding to the rendering style information according to the vertex identifiers and render them, wherein the target pixel fragments are obtained by converting target vertices in the target grid, and the target vertices and the rendering style information have the same vertex identifiers.
Further, the rendering module is specifically configured to:
rasterizing each vertex in the target grid to obtain a plurality of candidate pixel fragments, wherein the fragment attribute of the candidate pixel fragment comprises fragment identifications, and the fragment identifications are obtained by interpolation of vertex identifications in corresponding vertices;
and carrying out consistency matching on the segment identification and the vertex identification in the current rendering style information, and when the matching result is consistent, determining the candidate pixel segment as a target pixel segment and rendering the target pixel segment.
Further, the rendering module is further configured to:
creating a global variable in the rendering pipeline, acquiring the vertex identifier from the current rendering style information, and assigning the vertex identifier to the global variable;
traversing a plurality of candidate pixel fragments, and carrying out consistency matching on the fragment identification of the current candidate pixel fragment and the global variable.
Further, the rendering module is further configured to:
when the matching result is inconsistent, calling a discarding function in the fragment shader, and cutting the candidate pixel fragments based on the discarding function;
or when the matching result is inconsistent, setting the transparency of the candidate pixel fragments to a preset value.
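The comparison above runs in the fragment shader on the graphics processor. Purely as an illustration, the following Unity C# sketch shows the CPU side that assigns the global variable, and mirrors the per-fragment consistency test in plain C#; the shader property name, the numeric encoding of vertex identifiers, and the tolerance value are all assumptions, not taken from the patent.

```csharp
using UnityEngine;

public static class StyleMatcher
{
    // Assigns the global variable to the vertex identification code of the
    // rendering style information currently being drawn (hypothetical name).
    public static void SetCurrentStyle(float vertexIdCode)
    {
        Shader.SetGlobalFloat("_CurrentVertexId", vertexIdCode);
    }

    // Mirrors the fragment-shader test: a candidate pixel fragment whose
    // interpolated fragment identification matches the global variable is a
    // target pixel fragment; otherwise the shader would call discard, or set
    // the fragment's transparency to a preset value.
    public static bool IsTargetFragment(float fragmentId, float currentVertexId)
    {
        const float tolerance = 0.5f; // interpolation tolerance between integer codes
        return Mathf.Abs(fragmentId - currentVertexId) < tolerance;
    }
}
```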
On the other hand, the embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the image rendering method when executing the computer program.
In another aspect, an embodiment of the present application further provides a computer readable storage medium storing a computer program, where the computer program is executed by a processor to implement the above-mentioned image rendering method.
In another aspect, embodiments of the present application also provide a computer program product comprising a computer program stored in a computer readable storage medium. A processor of a computer device reads the computer program from a computer-readable storage medium, and the processor executes the computer program so that the computer device performs the image rendering method described above.
The embodiment of the application has at least the following beneficial effects. A plurality of grids to be rendered and the rendering style information associated with each grid to be rendered are acquired; vertex identifications corresponding to the rendering style information are generated and added both to the rendering style information and to the vertex attributes of the associated grids to be rendered, so that the vertices corresponding to each piece of rendering style information are marked by the vertex identifications. The grids to be rendered are merged to obtain at least one target grid, and the target grid is configured as a grid commonly referenced by the corresponding rendering style information, effectively reducing the number of grids. The target grid and each piece of rendering style information are sent to the graphics processor, which screens out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications and renders them. Thus, even though the grids to be rendered are merged into the target grid, the graphics processor can split the target grid by vertex identification and screen out the target pixel fragments corresponding to each piece of rendering style information for rendering. On the basis of rendering each grid to be rendered independently, the number of grids is effectively reduced, the frequency of interaction with the graphics processor is reduced, image rendering stutter is mitigated, and the smoothness of image rendering is improved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate and do not limit the application.
Fig. 1 is an application schematic diagram of an embodiment of the present application in a high-precision map making scene.
Fig. 2 is an application diagram of rendering a scene on a game screen according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an alternative implementation environment provided by an embodiment of the present application.
Fig. 4 is an optional flowchart of an image rendering method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a mesh to be rendered according to an embodiment of the present application.
Fig. 6 is an alternative schematic diagram of a mesh to be rendered according to an embodiment of the present application.
Fig. 7 is another alternative schematic diagram of a mesh to be rendered according to an embodiment of the present application.
Fig. 8 is a schematic diagram of association between a mesh to be rendered and rendering style information according to an embodiment of the present application.
Fig. 9 is a schematic flow chart of configuring a target grid as a grid commonly referenced by a scene node object according to an embodiment of the present application.
Fig. 10 is a schematic diagram of a scene node object of a target mesh in an embodiment of the application.
Fig. 11 is an optional flowchart of an image rendering method according to an embodiment of the present application.
Fig. 12 is a rendering diagram according to an embodiment of the present application.
Fig. 13 is another alternative rendering schematic provided by an embodiment of the present application.
Fig. 14 is a schematic view of another alternative rendering provided in an embodiment of the present application.
Fig. 15 is a schematic diagram of a system logic architecture according to an embodiment of the present application.
Fig. 16 is an overall process flow diagram of an image rendering method according to an embodiment of the present application.
Fig. 17 is a schematic diagram of an alternative structure of an image rendering apparatus according to an embodiment of the present application.
Fig. 18 is a schematic diagram of another alternative structure of an image rendering apparatus according to an embodiment of the present application.
Fig. 19 is a partial block diagram of a terminal according to an embodiment of the present application.
Fig. 20 is a partial block diagram of a server according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In the embodiments of the present application, when processing is performed on data related to the characteristics of a target object, such as the target object's attribute information or a set of such information, the permission or consent of the target object is obtained first, and the collection, use and processing of the data comply with relevant laws, regulations and standards. The target object may be a user. In addition, when an embodiment of the application needs to acquire the attribute information of the target object, the individual permission or consent of the target object is obtained through a popup window, a jump to a confirmation page, or the like, and only after the individual permission or consent has been explicitly obtained are the target-object-related data necessary for the embodiment to operate normally collected.
In order to facilitate understanding of the technical solution provided by the embodiments of the present application, some key terms used in the embodiments of the present application are explained here:
and (3) image rendering: refers to a process of converting three-dimensional scene or graphic data into a two-dimensional image. In computer graphics, image rendering involves computing and simulating geometric shapes, illumination, materials, etc. in a scene, ultimately producing a final image result. The main goal of image rendering is to generate a realistic two-dimensional image with effects of shadows, reflections, refraction, textures, etc., so as to simulate the illumination and material properties of the real world, and the image rendering method is used in the fields of visual presentation, animation production, etc.
Unity: a cross-platform game engine and development tool that provides a visual development environment and allows developers to create game scenes, characters, animations, special effects and other elements by drag-and-drop and by writing scripts. It supports multiple platforms, including Windows, macOS, Linux, iOS and Android, so that developers can publish games to different operating systems and devices. It can be used to create two-dimensional and three-dimensional games and interactive applications, and is widely applied in fields such as virtual reality, augmented reality and simulation.
Shader: a special program in computer graphics that describes and controls effects such as lighting, shading and materials in the graphics rendering pipeline. It runs on a graphics processor (Graphics Processing Unit, GPU) and is responsible for computing and processing the attributes and appearance of each pixel or vertex. During graphics rendering, a shader may define the visual effects of an object, such as color, texture, transparency, reflection and refraction, and how the object is affected by illumination. By writing shader code, a developer can customize the rendering effect and realize unique visual effects, making a game or application more realistic, artistic or distinctive.
With the development of computer technology, image rendering has become a key technology in many fields. For example, in a map scene, the map interface is typically rendered for display based on grid data. In the related art, when rendering images at higher precision, more elements need to be rendered and the number of grids grows correspondingly; as the business develops, the number of grids multiplies, causing image rendering to stutter and reducing its smoothness.
Based on the above, the embodiment of the application provides an image rendering method, an image rendering device, electronic equipment and a storage medium, which can improve the smoothness of image rendering.
The method provided by the embodiment of the application can be applied to various scenes, including but not limited to map, navigation, intelligent traffic, driving assistance, film and television production, virtual reality and other scenes.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application of an embodiment of the present application to a high-precision mapping scene. In the high-precision map making scene, the image rendering method provided by the embodiment of the application is used for rendering processes such as terrain rendering, vegetation rendering, water body rendering, building and structure rendering, shadow and illumination rendering, and special-effect rendering. Terrain rendering presents the terrain features of the map, including mountains, rivers, lakes and plains, using appropriate terrain textures, elevation data and coloring techniques. Vegetation rendering renders vegetation elements such as trees, grasslands and shrubs in the map by adding vegetation textures and models, increasing the natural feel and ecological atmosphere of the high-precision map. Water body rendering uses appropriate coloring and reflection techniques to render seas, lakes, rivers and other water bodies realistically, so that the water in the high-precision map has more texture and looks more lifelike. Building and structure rendering renders the appearance and details of buildings using appropriate textures and lighting effects, making them more realistic in the high-precision map. Shadow and illumination rendering creates dynamic shadow effects in the map by simulating sunlight, increasing the depth and realism of the scene. Special-effect rendering applies various special effects, such as rain, snow and fire, to enhance the immersion of the map. Therefore, the image rendering method provided by the embodiment of the application is suitable for map making scenes, displaying geographic information and environmental characteristics to the object in a visual manner and generating a vivid and realistic high-precision map.
Referring to fig. 2, fig. 2 is a schematic diagram illustrating an application of an embodiment of the present application to rendering a game screen. The image rendering method of the embodiment of the application can also be used for game screen rendering, presenting the virtual scenes, characters and special effects in a game to the target object realistically. The game screen rendering process includes: terrain rendering of features such as mountains, rivers and forests; game character rendering using character models, texture maps and skeletal animation techniques; scene rendering that creates realistic illumination changes and scene depth by simulating different types of light sources (such as the sun or a light bulb) and shadow casting; special-effect rendering of various dynamic effects, including flame, smoke and water simulation, using particle systems, shaders, texture animation and other techniques; texture and material rendering that uses high-resolution texture maps and suitable material properties (e.g. metal, glass, wood) to increase the detail and realism of the game scene; and post-processing rendering that adds effects such as color correction, depth of field and motion blur to improve the game's visual effect and artistic expression. The image rendering method provided by the embodiment of the application can create the expected visual effects during game screen rendering, improving the immersion and experience of the game.
Referring to fig. 3, fig. 3 is a schematic diagram of an alternative implementation environment provided by an embodiment of the present application, where the implementation environment includes a terminal 301 and a data processing server 302, where the terminal 301 and the data processing server 302 are connected through a communication network.
Taking a high-precision map making scene as an example, the terminal 301 may be a vehicle-mounted terminal. The data processing server 302 may obtain a plurality of grids to be rendered and the rendering style information associated with each grid to be rendered. The central processor of the data processing server 302 generates vertex identifiers corresponding to the rendering style information, adds the vertex identifiers to the rendering style information and to the vertex attributes of the associated grids to be rendered, merges the grids to be rendered to obtain at least one target grid, configures the target grid as a grid commonly referenced by the corresponding rendering style information, and sends the target grid and each piece of rendering style information to the graphics processor of the data processing server 302, so that the graphics processor screens out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifiers for rendering, wherein the target pixel fragments are obtained by converting target vertices in the target grid, and the target vertices and the rendering style information have the same vertex identifiers. Finally, the data processing server 302 sends the rendered map data to the terminal 301 for display.
In addition, the data processing server 302 may also send the plurality of grids to be rendered and the rendering style information associated with each grid to be rendered to the terminal 301. The central processor of the terminal 301 then generates the vertex identifiers corresponding to the rendering style information, adds the vertex identifiers to the rendering style information and to the vertex attributes of the associated grids to be rendered, merges the grids to be rendered to obtain at least one target grid, configures the target grid as a grid commonly referenced by the corresponding rendering style information, and sends the target grid and each piece of rendering style information to the graphics processor of the terminal 301, so that the graphics processor screens out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifiers for rendering and display.
Accordingly, the terminal 301 may display the rendered map data in the navigation interface of a map product, enhancing the visual effect of the navigation interface and helping the driving object better perceive the surrounding road environment, for example by clearly showing detailed information such as road surface conditions, street layout and intersections. This improves the driving object's awareness of, and ability to react to, the driving environment, thereby reducing the probability of accidents and improving driving safety. The map product may be any of various map products such as a high-precision virtual map, a normal-precision map or an urban road model.
The terminal 301 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a smart home appliance, a vehicle-mounted terminal, etc. The terminal 301 and the data processing server 302 may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment of the present application.
The data processing server 302 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. In addition, the data processing server 302 may also be a node server in a blockchain network.
The principle of the image rendering method provided by the embodiment of the present application is described in detail below.
Referring to fig. 4, fig. 4 is an optional flowchart of an image rendering method provided in an embodiment of the present application, where the image rendering method is applied to a central processing unit, and may be executed by a server, or may be executed by a terminal, or may be executed by a server and a terminal in cooperation, and in the embodiment of the present application, the method is described by way of example as being executed by the server. The image rendering method includes, but is not limited to, the following steps 401 to 404.
Step 401: Acquiring a plurality of grids to be rendered and the rendering style information associated with each grid to be rendered.
After a three-dimensional model is established according to service requirements, a grid is used to represent the basic geometric structure of a three-dimensional object or surface. The grid consists of vertices, edges and faces, and the appearance and shape of the object are described by connecting them. Each vertex in the grid has coordinates in three-dimensional space, an edge describes the connection between two vertices, and a face is a planar area bounded by edges. Grids have different topologies, such as triangular meshes and quadrilateral meshes: in a triangular mesh each face consists of three sides, while in a quadrilateral mesh each face consists of four sides. Triangular meshes are generally the most common and widely used type of mesh because of their simpler structure and better computational performance. By performing operations such as vertex adjustment, patch subdivision and texture mapping on the grid, a realistic three-dimensional image can be created, with illumination, shading, texture and the like handled in a rendering engine. In this embodiment, one or more unrendered grids are obtained as grids to be rendered according to service rendering requirements, and the subsequent rendering process is performed according to the specific service requirements.
The rendering style information associated with a grid to be rendered refers to attribute parameters, such as appearance and effect parameters, applied to the grid to be rendered according to service requirements, and describes the style of the grid to be rendered in the rendered image. Rendering style information in this embodiment includes parameters such as color and material attributes, texture attributes, illumination and shadow attributes, transparency attributes, and reflection attributes. The color and material attributes define the surface color, reflectivity and glossiness of an object; specifically, they may be different RGB values or material maps used to achieve different appearance effects, such as metal, plastic or wood grain. The texture attributes define texture effects, including texture mapping, normal mapping, bump mapping and the like; by mapping image patches to the surface of an object with different texture effects, the object looks more realistic, with detail and complexity. The illumination and shadow attributes define the type, color, intensity and direction of illumination and the resulting shadow effect; combining them can change the perceived shape and brightness of an object, providing a more realistic scene rendering effect. The transparency attribute controls the transparency and translucency of an object so that what is behind it shows partially or fully, which is important when rendering glass, liquid, smoke and similar scenes. The reflection attributes define how the object surface reflects light and produces color, and set the different algorithms and parameters used when calculating illumination. It will be appreciated that the selection and application of rendering style information depends on the particular application scenario and business requirements. In addition, a grid to be rendered can be associated in turn with several different pieces of rendering style information, and various visual effects are obtained by adjusting and combining rendering style information, thereby accurately controlling the appearance of the rendered object.
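As a minimal sketch of how such rendering style information might be grouped into a single record in Unity C# (all field names here are illustrative assumptions, not the patent's data layout):

```csharp
using UnityEngine;

// Hypothetical rendering-style-information record; real style data would
// follow the business schema described in the text above.
public class RenderStyleInfo
{
    public Color color;          // surface color (RGB values)
    public Texture2D texture;    // texture map applied to the grid surface
    public float transparency;   // 0 = fully transparent, 1 = opaque
    public float reflectivity;   // how strongly the surface reflects light
    public string vertexId;      // vertex identification, assigned in step 402
}
```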
In this step, the grids to be rendered are obtained, and at the same time the rendering style information associated with each grid to be rendered is obtained.
Step 402: generating vertex identifications corresponding to the rendering style information, adding the vertex identifications to the rendering style information, and adding the vertex identifications to vertex attributes of the associated grids to be rendered.
To associate rendering style information with grids to be rendered, a vertex identifier is generated for each piece of rendering style information and associated with both the rendering style information and the grid to be rendered, ensuring a unique relationship between them and avoiding collision or confusion during rendering. Using vertex identifiers, the style required by a grid to be rendered can be obtained in different rendering scenes from the rendering style information corresponding to the vertex identifier, realizing personalized and customized rendering effects. In addition, a rendering style information list may be constructed to store the correspondence between rendering style information and vertex identifiers, so that rendering style information can be shared, multiplexed and combined as needed. The format of the vertex identifier can be set according to actual service requirements. If new rendering style information appears, a new vertex identifier is generated in the format agreed by the service requirements, associated with the new rendering style information, and the association between them is stored in the rendering style information list.
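A minimal Unity C# sketch of such a rendering style information list, reusing the hypothetical RenderStyleInfo record from the earlier sketch; a dictionary keyed by vertex identification is one plausible way to let style information be shared, multiplexed and combined, and all names are assumptions:

```csharp
using System.Collections.Generic;

// Stores the correspondence between vertex identifications and rendering
// style information.
public class StyleRegistry
{
    private readonly Dictionary<string, RenderStyleInfo> styles =
        new Dictionary<string, RenderStyleInfo>();

    // Associates a newly generated vertex identification with its rendering
    // style information; the identification format follows the format agreed
    // by the service requirement.
    public void Register(string vertexId, RenderStyleInfo style)
    {
        style.vertexId = vertexId;
        styles[vertexId] = style;
    }

    public RenderStyleInfo Lookup(string vertexId) => styles[vertexId];
}
```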
In one possible implementation, the grids to be rendered may have the same or different rendering requirements. At this time, vertex identifications corresponding to the rendering style information need to be generated according to different rendering requirements. The vertex identification corresponding to the rendering style information can be obtained according to the second grid identification of the grid to be rendered or the style identification corresponding to the rendering style information. The process of generating vertex identifications corresponding to rendering style information specifically includes the steps of: 1) When the same grid to be rendered is respectively associated with different rendering style information, acquiring a second grid identifier of the current grid to be rendered, and acquiring vertex identifiers corresponding to the associated rendering style information according to the second grid identifier, wherein the second grid identifier is used for marking the different grids to be rendered; 2) When the same rendering style information is respectively associated with different grids to be rendered, style identifiers of the rendering style information are obtained, the style identifiers are used as vertex identifiers corresponding to the rendering style information, and the style identifiers are used for marking different rendering style information; 3) When different grids to be rendered are respectively associated with different rendering style information, a second grid identifier is obtained, and a vertex identifier corresponding to the associated rendering style information is obtained according to the second grid identifier, or a style identifier is obtained, and the style identifier is used as the vertex identifier corresponding to the rendering style information.
There are three cases of rendering requirements for the grids to be rendered. In the first case, the same grid to be rendered is associated with a plurality of different pieces of rendering style information; a second grid identifier for marking the grid to be rendered is obtained, and the vertex identifiers of the different pieces of rendering style information are set to the second grid identifier, so that the second grid identifier links the grid to be rendered with its corresponding pieces of rendering style information. In the second case, a plurality of different grids to be rendered are associated with the same rendering style information; the vertex identifier of the rendering style information is then its style identifier, which can be used to distinguish different pieces of rendering style information. In the third case, different grids to be rendered are associated with different pieces of rendering style information; the vertex identifier may be either the second grid identifier of the grid to be rendered or the style identifier of the rendering style information. In summary, these three ways effectively associate grids to be rendered with rendering style information, so that different style information can be applied as expected during rendering.
In one possible implementation, the vertices in the grid to be rendered have corresponding vertex attributes, such as position, normal, color, texture coordinates, tangent and bitangent. The position represents the spatial coordinates of the vertex and determines its location in three-dimensional space. The normal is a vector perpendicular to the mesh patch, describing the orientation and illumination information of the patch where the vertex is located; normals are used to calculate lighting and shading effects. Colors are used to color vertices during rendering so that they appear in different colors on screen. Texture coordinates map textures onto vertices; they are two-dimensional coordinates specifying the vertex's position in the texture image and are used to achieve the mapping effect. The tangent and bitangent are auxiliary attributes used for normal mapping and texture mapping on the three-dimensional model.
Since a mesh has a topology, it includes a plurality of vertices, and vertices and edges form patches, so a mesh includes a plurality of triangular patches. Taking triangular patches as an example, each patch consists of three vertices; to represent the positions of these vertices, the vertex attributes also include vertex indices, which indicate the positions of the vertices in the vertex list. Specifically, the vertex index is an array of integers, each element being the index value of a vertex in the vertex list. From the vertex indices it can be determined which vertices together constitute a patch; for example, a vertex index of [0,1,2] means that the 0th, 1st and 2nd vertices in the vertex list constitute a patch. Since each mesh contains at least one patch, the vertex indices in a mesh form a vertex index list. The benefit of using vertex indices is that the overhead of storing and processing vertex data is reduced: compared with directly storing the vertex coordinates of each patch, vertex indices allow the data of the same vertex to be shared, avoiding redundant storage. This is particularly important in complex models containing a large number of repeated vertices, where it saves storage space and improves rendering efficiency. In addition, vertex indices may be used together with vertex attributes; for example, the coordinates, normals, texture coordinates and other attributes of each patch can be matched through the vertex indices, so that the corresponding attribute data is accurately obtained for each patch.
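The vertex sharing described above can be made concrete with a small Unity C# sketch: four vertices and a six-entry vertex index list describe two triangular patches that share an edge (coordinates and names are illustrative):

```csharp
using UnityEngine;

public static class VertexIndexExample
{
    // Builds a quad from four shared vertices and a six-entry vertex index
    // list; the two patches reuse the vertices they have in common.
    public static Mesh BuildQuad()
    {
        var mesh = new Mesh();
        mesh.vertices = new[]
        {
            new Vector3(0, 0, 0), // index 0
            new Vector3(1, 0, 0), // index 1
            new Vector3(1, 1, 0), // index 2
            new Vector3(0, 1, 0)  // index 3
        };
        // Each consecutive group of three index values forms one patch:
        // [0,1,2] and [0,2,3] share the vertices at indices 0 and 2.
        mesh.triangles = new[] { 0, 1, 2, 0, 2, 3 };
        return mesh;
    }
}
```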
In one possible implementation manner, there are a plurality of grids to be rendered, each containing a plurality of vertices, and rendering a grid with rendering style information essentially means rendering its vertices with that information; therefore, in the embodiment of the application, the vertex identifier corresponding to the rendering style information is added to the vertex attributes of the vertices in the grid to be rendered. Referring to fig. 5, fig. 5 is a schematic diagram of a mesh to be rendered according to an embodiment of the present application. For example, there are 3 grids to be rendered, Mesh1, Mesh2 and Mesh3. Mesh1 has 2 vertices, d1 and d2; Mesh2 has 3 vertices, d3, d4 and d5; Mesh3 has 1 vertex, d6 (for ease of description only a small number of vertices are shown; in practice the number of vertices in one grid is far greater than shown in fig. 5). The rendering style information acquired according to the service requirement includes y1, y2 and y3. Assume the rendering style information associated with Mesh1 is y1, that associated with Mesh2 is y2, and that associated with Mesh3 is y3. This corresponds to the third case of the foregoing embodiment, in which different grids to be rendered are associated with different rendering style information; if the vertex identifiers are obtained from the second grid identifiers of the grids to be rendered, the vertex identifier "Mesh1" is added to rendering style information y1, "Mesh2" to y2, and "Mesh3" to y3. Meanwhile, the vertex identifier "Mesh1" is added to the vertex attributes of vertices d1 and d2, "Mesh2" to the vertex attributes of vertices d3, d4 and d5, and "Mesh3" to the vertex attribute of vertex d6. In this way, the grid to be rendered to which a vertex belongs, and the rendering style information associated with that grid, can be located from the vertex identifier.
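One plausible way to realize this tagging in Unity, sketched below, is to encode each distinct vertex identification string as a numeric code and write it into a spare UV channel of the grid, since custom per-vertex attributes reach the graphics processor through such channels; the encoding scheme and all names are assumptions, not prescribed by the patent text.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class VertexIdTagger
{
    // Maps each vertex identification string (e.g. "Mesh1") to a numeric code.
    static readonly Dictionary<string, float> codes = new Dictionary<string, float>();

    // Writes the same identification code into every vertex of the grid,
    // e.g. "Mesh1" for both vertices d1 and d2 in the example above.
    public static void Tag(Mesh mesh, string vertexId)
    {
        if (!codes.TryGetValue(vertexId, out var code))
            codes[vertexId] = code = codes.Count + 1;

        var ids = new List<Vector2>(mesh.vertexCount);
        for (int i = 0; i < mesh.vertexCount; i++)
            ids.Add(new Vector2(code, 0f));
        mesh.SetUVs(2, ids); // UV channel 2 carries the vertex identification
    }
}
```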
In one possible implementation, if different areas in the mesh to be rendered have different rendering requirements, the mesh to be rendered may be divided into sub-meshes, and in addition, the sub-meshes may be respectively associated with the same or different rendering style information. Dividing the mesh to be rendered into a plurality of sub-meshes, and obtaining vertex identifications corresponding to the associated rendering style information according to the second mesh identifications, wherein the process specifically comprises the following steps of: and acquiring a sub-grid identifier for marking the sub-grid, and splicing the second grid identifier and the sub-grid identifier to obtain vertex identifiers corresponding to the associated rendering style information.
For example, in a map rendering scene, the grid to be rendered may include road data, and the road may be further subdivided: the vehicle-lane-related data in the road data serves as one sub-grid, and the pavement-related data serves as another sub-grid.
The second grid identifier and the sub-grid identifier are spliced to obtain the vertex identifiers corresponding to the vertex attributes; a preset symbol can be used for the splicing, for example "+", "-" or "#". The "#" symbol is used for illustration in this embodiment below.
In a possible implementation manner, the sub-grids obtained by dividing a grid to be rendered may differ in size, the number of vertices contained in each sub-grid is not limited, and the rendering style information associated with each sub-grid may be the same or different. Referring to fig. 6, fig. 6 is a schematic diagram of a mesh to be rendered according to an embodiment of the present application. There are 2 grids to be rendered, Mesh4 and Mesh5, and the rendering style information corresponding to the service requirement includes y4, y5, y6, y7 and y8. Mesh4 has 5 vertices, d7, d8, d9, d10 and d11; Mesh5 has 4 vertices, d12, d13, d14 and d15. According to the service requirements, Mesh4 is divided into 3 sub-grids, SubMesh41, SubMesh42 and SubMesh43, where SubMesh41 contains vertices d7 and d8, SubMesh42 contains vertex d9, and SubMesh43 contains vertices d10 and d11. Mesh5 is divided into 2 sub-grids, SubMesh51 and SubMesh52, where SubMesh51 contains vertex d12 and SubMesh52 contains vertices d13, d14 and d15. Assume the rendering style information associated with SubMesh41 is y4, with SubMesh42 is y5, with SubMesh43 is y6, with SubMesh51 is y7, and with SubMesh52 is y8.
Assuming that, in fig. 6, the second grid identifiers corresponding to Mesh4 and Mesh5 are "Mesh4" and "Mesh5", the sub-grid identifiers corresponding to SubMesh41, SubMesh42 and SubMesh43 are "SubMesh41", "SubMesh42" and "SubMesh43", and the sub-grid identifiers corresponding to SubMesh51 and SubMesh52 are "SubMesh51" and "SubMesh52", then after the second grid identifiers and the sub-grid identifiers are spliced, the splice identifiers corresponding to SubMesh41, SubMesh42 and SubMesh43 are "Mesh4#SubMesh41", "Mesh4#SubMesh42" and "Mesh4#SubMesh43", and the splice identifiers corresponding to SubMesh51 and SubMesh52 are "Mesh5#SubMesh51" and "Mesh5#SubMesh52".
Therefore, the vertex identifier corresponding to rendering style information y4 is "Mesh4#SubMesh41", that corresponding to y5 is "Mesh4#SubMesh42", that corresponding to y6 is "Mesh4#SubMesh43", that corresponding to y7 is "Mesh5#SubMesh51", and that corresponding to y8 is "Mesh5#SubMesh52". At this time, the vertex identifier "Mesh4#SubMesh41" is added to the vertex attributes of vertices d7 and d8 of Mesh4, "Mesh4#SubMesh42" to the vertex attribute of vertex d9, "Mesh4#SubMesh43" to the vertex attributes of vertices d10 and d11, "Mesh5#SubMesh51" to the vertex attribute of vertex d12, and "Mesh5#SubMesh52" to the vertex attributes of vertices d13, d14 and d15.
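The splicing itself reduces to string concatenation with the preset symbol; a minimal C# helper might look like this (names hypothetical):

```csharp
public static class VertexIdBuilder
{
    // e.g. BuildForSubGrid("Mesh4", "SubMesh41") returns "Mesh4#SubMesh41".
    public static string BuildForSubGrid(string secondGridId, string subGridId)
        => secondGridId + "#" + subGridId;
}
```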
In addition, if different sub-grids need to set the same rendering style information, a style identification of the rendering style information is acquired, the style identification is used as a vertex identification corresponding to the rendering style information, and the style identification is added to vertex attributes of vertices in the sub-grids.
In the above embodiment, if a grid to be rendered is divided into sub-grids, the vertex identifiers of the rendering style information associated with those sub-grids are spliced from the second grid identifier and the sub-grid identifiers; the vertex identifiers for grids to be rendered that are not divided into sub-grids may be obtained directly from their second grid identifiers. In addition, not every grid to be rendered needs to be divided into sub-grids; only some may be divided according to service requirements. Referring to fig. 7, fig. 7 is a schematic diagram of a mesh to be rendered according to an embodiment of the present application. In fig. 7, 5 grids to be rendered are illustrated, and only the third has corresponding sub-grids. Therefore, the vertex identifiers of the rendering style information associated with the sub-grids of the third grid are spliced from the second grid identifier of that grid and the corresponding sub-grid identifiers, while the vertex identifiers of the rendering style information associated with the other grids are obtained from the second grid identifiers of those grids.
In the embodiment of the application, the association between grids to be rendered and rendering style information is set according to actual service requirements. The same rendering style information may be associated with different grids to be rendered, with sub-grids of different grids to be rendered, or with a mixture of sub-grids of some grids to be rendered and other whole grids to be rendered. Referring to fig. 8, fig. 8 is a schematic diagram illustrating association between a mesh to be rendered and rendering style information according to an embodiment of the present application. In fig. 8, 5 grids to be rendered are illustrated, Mesh6, Mesh7, Mesh8, Mesh9 and Mesh10, where Mesh6 includes 2 sub-grids, SubMesh61 and SubMesh62, Mesh8 includes 2 sub-grids, SubMesh81 and SubMesh82, and the rendering style information includes y9, y10 and y11. Mesh7, SubMesh81 of Mesh8 and SubMesh62 of Mesh6 are associated with rendering style information y9; SubMesh61 of Mesh6 and SubMesh82 of Mesh8 are associated with rendering style information y10; Mesh9 and Mesh10 are associated with rendering style information y11.
According to the method for setting vertex identifiers in the above embodiment, since rendering style information y9 is simultaneously associated with one whole grid to be rendered and with sub-grids of other grids to be rendered, its style identifier is used as the vertex identifier; likewise, rendering style information y10 and y11 also use their style identifiers as vertex identifiers. Suppose the style identifier of rendering style information y9 is "y9", that of y10 is "y10", and that of y11 is "y11". Then the vertex identifiers of the vertices in Mesh7, SubMesh81 and SubMesh62, all associated with rendering style information y9, are set to "y9"; the vertex identifiers of the vertices in SubMesh61 and SubMesh82, associated with rendering style information y10, are set to "y10"; and the vertex identifiers of the vertices in Mesh9 and Mesh10, associated with rendering style information y11, are set to "y11".
Through the above embodiment, corresponding vertex identifications are added both to the rendering style information and to the vertex attributes, so the rendering style information corresponding to each vertex can be marked by its vertex identification, without depending on which mesh to be rendered the vertex belongs to or on its index position in the vertex index list. If a service requirement is updated and the rendering style information of some vertices needs to change, the vertex identification corresponding to the new rendering style information is acquired and used to update the vertex attributes of the corresponding vertices. Adding vertex identifications enhances the flexibility and extensibility of the rendering pipeline: which vertices to render can be selected according to service needs, to meet specific rendering requirements. By adding the vertex identification into the vertex attributes, the embodiment of the application provides a vertex data structure that is more flexible and easier to manage and manipulate, and enables locally customized rendering.
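As an illustrative sketch only (the use of a spare UV channel and the float encoding of the identification are assumptions, not part of the embodiment), the following Unity C# code stamps one vertex identification into the vertex attributes of every vertex of a mesh:

using System.Collections.Generic;
using UnityEngine;

public static class VertexIdTagger
{
    // Writes the same identification value into every vertex of the mesh.
    // Encoding the identification as a float in a spare UV channel (channel 3
    // here) is one possible choice; any unused vertex attribute would do.
    public static void TagAllVertices(Mesh mesh, float vertexId)
    {
        var ids = new List<Vector2>(mesh.vertexCount);
        for (int i = 0; i < mesh.vertexCount; i++)
            ids.Add(new Vector2(vertexId, 0f));
        mesh.SetUVs(3, ids); // the shader later reads this channel as the source of the fragment identification
    }
}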
Step 403: merge the multiple meshes to be rendered to obtain at least one target mesh, and configure the target mesh as a mesh commonly referenced by the corresponding rendering style information.
In Unity rendering, the main thread (Unity Main) is responsible for tasks such as scene updating and rendering; rendering operations, including creating, modifying, and uploading Mesh data, are executed on the main thread. The Unity rendering pipeline does not support multiple threads accessing and modifying rendering resources simultaneously. During rendering, the main thread must perform the various rendering stages in a specific order to ensure a correct rendering result; if another thread modified the rendering resources during this process, the integrity of the rendering pipeline would be destroyed, causing rendering errors. To avoid such thread-conflict errors, Unity requires that all rendering-related operations be performed on the main thread and in a synchronized manner, which guarantees correct use of rendering resources in the rendering pipeline and keeps it thread safe. It follows that when the Unity main thread uploads synchronously, uploading too frequently blocks the thread, and uploading too many Meshes degrades rendering performance; both situations should be avoided.
Therefore, in the embodiment of the application, the meshes to be rendered are merged according to mesh type: meshes to be rendered of the same mesh type are merged together, yielding one target mesh per type and effectively reducing the number of meshes. The mesh type indicates the type of element modeled by the mesh to be rendered, and element types can be set according to the specific application scene. For example, mesh types in a terrain modeling scene include natural terrain such as ground surfaces, mountains, and hills; mesh types in a building modeling scene include artificial structures such as houses, cities, and buildings; mesh types in a character modeling scene include human, biological, or virtual roles; mesh types in an object modeling scene include solid objects such as furniture, vehicles, and props; mesh types in a vegetation modeling scene include vegetation such as trees, grassland, and plants; mesh types in a road modeling scene include traffic routes such as roads, streets, and paths; mesh types in a water modeling scene include water bodies such as lakes, rivers, and oceans; mesh types in a sky modeling scene include sky elements such as sky, cloud layers, and atmospheric effects; mesh types in a special-effect modeling scene include effects such as flame, smoke, and explosions; and mesh types in a UI modeling scene include interface elements for user interaction, such as user interfaces, icons, and buttons.
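The merging step can be sketched in Unity C# as follows, assuming meshes arrive tagged with a mesh-type string; the grouping key and method names are illustrative, while CombineInstance and Mesh.CombineMeshes are standard Unity APIs:

using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public static class MeshMerger
{
    // Groups the meshes to be rendered by their mesh type and merges each
    // group into one target mesh with Unity's CombineMeshes.
    public static Dictionary<string, Mesh> MergeByType(
        IEnumerable<(Mesh mesh, string meshType, Matrix4x4 transform)> toRender)
    {
        var targets = new Dictionary<string, Mesh>();
        foreach (var group in toRender.GroupBy(m => m.meshType))
        {
            var combine = group
                .Select(m => new CombineInstance { mesh = m.mesh, transform = m.transform })
                .ToArray();
            var target = new Mesh();
            target.CombineMeshes(combine); // merges vertex lists and vertex index lists
            targets[group.Key] = target;   // e.g. key "road", "building"
        }
        return targets;
    }
}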
In one possible implementation, configuring the target mesh as a mesh commonly referenced by the corresponding rendering style information includes: generating a first mesh identification of the target mesh, the first mesh identification being used for marking different target meshes; then taking each target mesh as one rendering batch, and, for each piece of rendering style information corresponding to the target mesh in the current rendering batch, reading the target mesh from the mesh cache according to the first mesh identification and configuring the read target mesh as the mesh commonly referenced by that rendering style information.
Since the target mesh is obtained by merging multiple meshes to be rendered of the same mesh type, and the mesh type indicates the type of element modeled by the meshes, the process of generating the first mesh identification of the target mesh specifically includes: obtaining the mesh type of the meshes to be rendered and the style type of the rendering style information, and concatenating the mesh type and the style type to obtain the first mesh identification of the target mesh.
Illustratively, assume the rendering style information includes parameters such as color and texture attributes, illumination attributes, transparency attributes, and reflection attributes, so its style types may include a color type, a texture type, an illumination type, a transparency type, a reflection type, and the like. Suppose the element modeled by the mesh to be rendered is a road, so the mesh type is the road type, the style type of the rendering style information is the color type, and the color to be rendered is gray. Concatenating the mesh type and the style type then yields the first mesh identification of the target mesh: road type + gray.
In one possible implementation, each target mesh is treated as one rendering batch: the meshes to be rendered corresponding to the same first mesh identification form one target mesh, and all the meshes to be rendered contained in that target mesh form one rendering batch.
A batch processor (batcher) is a rendering optimization technique in Unity for merging and reducing the number of rendering calls, thereby improving performance. A rendering call is the process of sending graphics data to be drawn to the GPU, and the number of such calls can have a large impact on performance. The main role of the batcher is to reduce the communication overhead between the CPU and the GPU by merging the rendering calls of multiple meshes and materials into fewer batches. Multiple similar meshes and materials can be processed simultaneously and combined into the same batch, which reduces the cost of state switching and the number of graphics data transfers, utilizes hardware resources to the greatest extent, and thereby improves the efficiency and quality of graphics rendering.
In the embodiment of the application, batch rendering means using the batcher to pack the meshes to be rendered by target mesh and rendering each target mesh as one batch, so that as little switching as possible occurs during rendering, minimizing the number of state switches and interface calls and improving rendering performance. During packing for rendering optimization, the batcher packs the vertices of all meshes to be rendered in a target mesh, together with the associated rendering style information, into one batch, and renders all vertices in that batch with a single call to the rendering engine. In addition, the batcher sorts the data to be rendered within the target mesh and plans the rendering order of objects reasonably, making graphics rendering smoother.
In one possible implementation, configuring the target mesh as a mesh commonly referenced by the corresponding rendering style information may mean configuring the target mesh in a rendering batch as a shared mesh by means of the batcher. The shared mesh is a mesh shared by all the rendering style information; by letting objects use the same geometric data, variability between them is reduced, enabling sharing and optimization within the rendering batch. The target mesh establishment process in this embodiment includes: for each mesh to be rendered, extracting the geometric data in its vertex attributes, such as vertices, normals, and texture coordinates, and ensuring that the extracted geometric data share the same format and layout. Optimization techniques and compression algorithms, such as vertex buffer optimization, index buffer optimization, and vertex position compression, are then applied to the extracted geometric data to reduce data size and improve memory access efficiency, and similar geometric data are merged to eliminate redundancy. Besides the geometric data, the attribute variables of each mesh to be rendered, such as material attributes, position, and rotation, also need to be recorded; these attribute variables are updated after the target mesh is established, to ensure that the attributes of each mesh to be rendered are correctly applied to the target mesh. After the resulting target mesh is configured as the shared mesh of the rendering batch, the target mesh and the attribute variables of each piece of rendering style information can be transmitted to the rendering engine of the graphics processor at each rendering, ensuring that each piece of rendering style information is correctly rendered at its preset position and that the parallel processing capability of the graphics processor is fully utilized, thereby improving rendering performance and efficiency.
In one possible implementation, the rendering style information is configured as style objects, and the process of taking each target mesh as one rendering batch specifically includes: combining the multiple style objects corresponding to the same target mesh into a style object list, and then constructing the batch object of the rendering batch in which the target mesh is located from the first mesh identification, the target mesh, and the style object list.
A style object (Style Object) is a data structure for specifying the style and properties of a mesh to be rendered; the style objects in each rendering batch contain the various properties to be applied to the mesh, such as materials, textures, colors, and transparency. In the embodiment of the application, each piece of rendering style information is turned into one style object, and because the rendering style information corresponds to a vertex identification, each style object carries the vertex identification consistent with its rendering style information. Considering that each rendering batch contains multiple meshes to be rendered and each mesh to be rendered is associated with at least one piece of rendering style information, the multiple style objects corresponding to the same target mesh are managed with a style object list for convenience.
In addition, each rendering batch generates a corresponding batch object, which is responsible for managing and organizing the rendering operations of the batch and submits them to the rendering pipeline once at rendering time. The batch object also maintains and manages the rendering state of its batch, such as the rendering target, blending mode, and illumination settings, ensuring that rendering operations are performed under the same rendering state to avoid unnecessary state switching and overhead. Further, the batch object exposes interfaces for adding rendering operations, executing batch rendering, clearing state, and the like, so the batching process can be conveniently managed and controlled through these interfaces.
Illustratively, the batch object includes the first mesh identification, the target mesh, and the style object list, wherein the target mesh includes a vertex list formed by the vertices of all the meshes to be rendered and an index list formed by their vertex indices. Thus, referring to table 1, the batch object in the embodiment of the present application includes the following fields:
TABLE 1

first mesh identification: identification of the target mesh shared by the rendering batch
target mesh: vertex list of the vertices of all meshes to be rendered, and index list of their vertex indices
style object list: list of the style objects corresponding to the target mesh
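A minimal C# sketch of such a batch object, with illustrative member names (MeshId, TargetMesh, StyleObjects), might look like this; the style parameters shown are placeholders, as real rendering style information carries more attributes:

using System.Collections.Generic;
using UnityEngine;

// Sketch of the batch object described in table 1.
public class RenderBatch
{
    public string MeshId;                  // first mesh identification of the target mesh
    public Mesh TargetMesh;                // merged vertex list and vertex index list
    public List<StyleObject> StyleObjects; // style objects sharing the target mesh
}

// Style object carrying one piece of rendering style information
// and its vertex identification.
public class StyleObject
{
    public float VertexId; // vertex identification consistent with the rendering style information
    public Color Color;    // example style parameter
}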
In one possible implementation, after the rendering batches are generated by the batcher, for each piece of rendering style information corresponding to the target mesh in the current rendering batch, the target mesh is read from the mesh cache according to the first mesh identification and configured as the mesh commonly referenced by the rendering style information. The process specifically includes: traversing the style object list in the batch object to obtain the current style object; creating a scene node object for the current style object; creating a material object from the style object and binding the material object to the scene node object; adding the vertex identification in the style object to the material object; and finally reading the target mesh from the mesh cache according to the first mesh identification and configuring the read target mesh as the mesh commonly referenced by the scene node objects.
The mesh cache is the area in the global cache for storing the shared mesh corresponding to the target mesh. A global cache is a data structure used to store and share a large number of duplicate or common resources. Under this mechanism, resources are created and loaded only when needed and then stored in the global cache for use by different objects and scenes, avoiding the performance cost of repeatedly creating and loading them. The usage of all shared resources can be monitored, the cache size adjusted as needed, and unused resources cleared, so shared resources are managed effectively, maximum utilization of system resources is ensured, and system performance and efficiency are improved. This is particularly suitable for scenes that require frequent resource loading.
In the above embodiment, referring to fig. 9, fig. 9 is a schematic flow chart of configuring a target mesh as a mesh commonly referenced by scene node objects according to an embodiment of the present application. In this embodiment there are multiple target meshes, so the batcher generates multiple batch objects. The batch objects are first traversed and one is selected as the current batch object; the style object list in the current batch object is then traversed, the style objects are obtained one by one as the current style object in list order, and a scene node object is created for each current style object.
A scene node object (Game Object) is the most basic building unit in Unity and represents an entity in the scene; the entity can be a character, an object, a trigger, and the like, and behavior and functionality are given to it by adding components and writing scripts. By operating on Game Objects, a developer can create highly interactive and richly varied scenes. Each Game Object may contain multiple components (Components) that define its behavior and functionality: for example, the Transform component controls the position, rotation, and scaling of the Game Object; the Renderer component renders its graphics; the Collider component handles collision detection, and so on. In Unity development, a Game Object can be controlled and manipulated through scripts; for example, by obtaining a reference to the Game Object and using the corresponding functions and attributes, the developer can change its properties, trigger animations, respond to input, and so on.
The scene node object includes a mesh filter (Mesh Filter) component and a mesh renderer (Mesh Renderer) component. In Unity, the mesh filter component is a component for storing and managing geometric mesh data, typically used together with a mesh renderer component. The mesh filter component stores and manages the geometric mesh data of the scene node object, including vertex coordinates, triangle indices, texture coordinates, normals, and the like, and provides this data to the mesh renderer component for rendering; the mesh renderer component sends the mesh data provided by the mesh filter component, together with the material and other attributes, to the shader, which renders the object on screen.
Referring to fig. 9, a scene node object is created for the current style object, a material object is then created from the current style object, the material object is bound to the scene node object, and the vertex identification in the style object is added to the material object. The material object is one or more material attributes set by the mesh renderer component of the scene node object according to the rendering style information corresponding to the style object, and represents the rendering effect obtained by rendering according to that rendering style information. After the material object is obtained, it can be assigned to the shared material attribute of the mesh renderer component of the scene node object, binding the material object and the scene node object and ensuring the material is applied to the target mesh. In addition, since the rendering style information and the vertices in the mesh to be rendered are selected by vertex identification at rendering time, the vertex identification must be added to the material object so the corresponding vertices can be located when the material is rendered. For example, a Material.SetVectorArray() call can add the vertex identification to the corresponding material object, so the vertex identification is passed to the shader at rendering time and the shader can select the vertices corresponding to the material object during shading.
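The creation and binding flow can be sketched as follows, reusing the StyleObject sketch above; the shader name "Custom/VertexIdFilter" and the property name "_VertexId" are illustrative assumptions, not names prescribed by the embodiment:

using UnityEngine;

public static class SceneNodeBuilder
{
    // For one style object, creates a scene node object, binds a material
    // object built from the style, and passes the vertex identification to
    // the shader through a material property.
    public static GameObject Build(StyleObject style, Mesh targetMesh)
    {
        var node = new GameObject("styled-node");
        var filter = node.AddComponent<MeshFilter>();
        var renderer = node.AddComponent<MeshRenderer>();

        var material = new Material(Shader.Find("Custom/VertexIdFilter"));
        material.color = style.Color;
        material.SetVector("_VertexId", new Vector4(style.VertexId, 0, 0, 0));

        renderer.sharedMaterial = material; // bind the material object to the scene node object
        filter.sharedMesh = targetMesh;     // reference the shared target mesh
        return node;
    }
}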
In fig. 9, each style object in the style object list corresponds to one scene node object, and all scene node objects are mounted onto the shared mesh corresponding to the target mesh in a shared manner, i.e., the shared mesh corresponding to the target mesh is assigned to the shared mesh attribute of the mesh filter component of each scene node object. Multiple scene node objects thus share the same target mesh, which reduces memory occupation and rendering calls and improves rendering performance and efficiency. In this embodiment, the mesh cache is queried for the target mesh according to the first mesh identification. If the query shows that the target mesh can be read from the mesh cache by its first mesh identification, it is read, and the read target mesh is configured as the mesh commonly referenced by the scene node objects. If the query shows that the target mesh cannot be read from the mesh cache by its first mesh identification, the corresponding target mesh is created according to the first mesh identification, and the association between the first mesh identification and the target mesh is written into the mesh cache. For example, the scene node object of the first style object in the style object list may fail to find the target mesh in the mesh cache; an instance of the target mesh is then created on demand, added to the mesh cache, and assigned to that scene node object, and the scene node objects of the subsequent style objects can directly find the corresponding target mesh in the mesh cache.
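The mesh cache lookup described here can be sketched as a simple get-or-create map; the class and method names are illustrative:

using System.Collections.Generic;
using UnityEngine;

// Sketch of the mesh cache: read the target mesh by its first mesh
// identification, creating and registering it on a cache miss.
public static class MeshCache
{
    private static readonly Dictionary<string, Mesh> cache =
        new Dictionary<string, Mesh>();

    public static Mesh GetOrCreate(string firstMeshId, System.Func<Mesh> create)
    {
        if (!cache.TryGetValue(firstMeshId, out var mesh))
        {
            mesh = create();           // build the target mesh on first use
            cache[firstMeshId] = mesh; // write the association into the cache
        }
        return mesh;
    }
}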
Referring to fig. 10, fig. 10 is a schematic diagram of scene node objects of a target mesh according to an embodiment of the present application. In fig. 10, a corresponding material object is configured under each scene node object, each material object contains the corresponding vertex identification, and multiple scene node objects are mounted on the same target mesh, which is commonly referenced by them.
In the above process, for the current batch object, the scene node object corresponding to each style object is generated one by one and mounted on the target mesh. The next batch object is then selected as the current batch object, and the above process is repeated to generate its scene node objects and mount them on the corresponding target mesh.
Step 404: send the target mesh and each piece of rendering style information to the graphics processor, so that the graphics processor screens out, according to the vertex identifications, the target pixel fragments corresponding to each piece of rendering style information for rendering.
The target pixel fragments are obtained by conversion from target vertices in the target mesh, the target vertices having the same vertex identification as the rendering style information.
Through the above process, the meshes to be rendered are converted into the target mesh, and the target mesh is then sent to the graphics processor together with each piece of rendering style information associated with the original meshes. After receiving the target mesh and the rendering style information, the graphics processor first screens out the target vertices according to the vertex identifications, obtains the target pixel fragments by conversion from those target vertices in the target mesh, and then renders the target pixel fragments.
According to the embodiment of the application, multiple meshes to be rendered and the rendering style information associated with each of them are acquired, vertex identifications corresponding to the rendering style information are generated, and the vertex identifications are added to the rendering style information and to the vertex attributes of the associated meshes, so the vertices corresponding to each piece of rendering style information can be marked by vertex identification. The multiple meshes to be rendered are then merged into at least one target mesh, and the target mesh is configured as a mesh commonly referenced by the corresponding rendering style information, effectively reducing the number of meshes. The target mesh and each piece of rendering style information are sent to the graphics processor, which screens out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications and renders them. Thus, even though multiple meshes to be rendered are merged into a target mesh, the graphics processor can split the target mesh by vertex identification and screen out the corresponding target pixel fragments for rendering. On the basis of still rendering each mesh to be rendered independently, the number of meshes is effectively reduced, the interaction frequency with the graphics processor is lowered, the stutter of image rendering is mitigated, and the smoothness of image rendering is improved.
The following describes in detail the rendering process performed by the graphics processor according to the received target mesh and each piece of rendering style information in the embodiment of the present application.
Referring to fig. 11, fig. 11 is an optional flowchart of an image rendering method provided in an embodiment of the present application. This image rendering method is applied to a graphics processor and may be executed by a server, by a terminal, or by a server and a terminal in cooperation; in the embodiment of the present application, execution by the server is taken as the example. The image rendering method includes, but is not limited to, the following steps 1101 to 1102.
Step 1101: acquire a target mesh and multiple pieces of rendering style information.
The target mesh is obtained by merging the meshes to be rendered that are associated with each piece of rendering style information. A mesh to be rendered is one of one or more unrendered meshes obtained according to the service rendering requirements. The rendering style information associated with a mesh to be rendered refers to the attribute parameters, such as appearance and effect, applied to the mesh according to service requirements, and describes the style of the mesh in the rendered image; in this embodiment it includes parameters such as color and texture attributes, illumination attributes, transparency attributes, and reflection attributes. The target meshes are obtained by merging the meshes to be rendered according to mesh type, merging meshes of the same mesh type together to obtain one target mesh per type, the aim being to reduce the number of meshes during rendering. In addition, the batcher packs the meshes to be rendered by target mesh and renders each target mesh as one rendering batch, so that as little switching as possible occurs during rendering, minimizing the number of state switches and interface calls and improving rendering performance. During packing for rendering optimization, the batcher packs the vertices of all meshes to be rendered in a target mesh, together with the associated rendering style information, into one rendering batch, and the target mesh is further configured as a mesh commonly referenced by the corresponding rendering style information.
In one possible implementation, the vertex identifications corresponding to the rendering style information cover the following cases: 1) when the same mesh to be rendered is associated with different pieces of rendering style information, the second mesh identification of the current mesh to be rendered is acquired, the second mesh identification being used for marking different meshes to be rendered, and the vertex identification corresponding to the rendering style information is the second mesh identification; 2) when the same rendering style information is associated with different meshes to be rendered, the style identification of the rendering style information is acquired, the style identification being used for marking different pieces of rendering style information, and the vertex identification corresponding to the rendering style information is the style identification; 3) when different meshes to be rendered are associated with different pieces of rendering style information, the vertex identification corresponding to the rendering style information is either the second mesh identification or the style identification.
There are multiple meshes to be rendered, each containing multiple vertices, and rendering a mesh means rendering it with its rendering style information, so the vertex identification corresponding to the rendering style information is added to the vertex attributes of the vertices in the mesh; that is, the vertex attributes of a vertex include the vertex identification corresponding to the associated rendering style information. Because corresponding vertex identifications are added both to the rendering style information and to the vertex attributes, the rendering style information corresponding to each vertex can be marked by its vertex identification, without depending on which mesh the vertex belongs to or on its index position in the vertex index list.
Step 1102: screen out, according to the vertex identifications, the target pixel fragments corresponding to each piece of rendering style information, and render them.
The target pixel fragments are obtained by conversion from target vertices in the target mesh, the target vertices having the same vertex identification as the rendering style information.
Because the vertex attributes of the vertices include the vertex identifications corresponding to the rendering style information, when the graphics processor renders the pieces of rendering style information one by one, it can screen out the vertices corresponding to each piece of rendering style information as target vertices according to the vertex identifications, the target vertices having the same vertex identification as the rendering style information. The target vertices are then converted into target pixel fragments, and the target pixel fragments are rendered. Thus, even if the graphics processor receives a target mesh merged from multiple meshes to be rendered, it can split the target mesh by vertex identification at rendering time and screen out the target pixel fragments corresponding to each piece of rendering style information. On the basis of rendering each mesh to be rendered independently, there is no need to interact over a large number of individual meshes at rendering time, so the interaction frequency is reduced, the stutter of image rendering can be mitigated, and the smoothness of image rendering is improved.
In one possible implementation, the process of screening out and rendering the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications specifically includes: rasterizing the vertices in the target mesh to obtain multiple candidate pixel fragments, the fragment attributes of a candidate pixel fragment including a fragment identification interpolated from the vertex identifications contained in the vertex attributes of the corresponding vertices; then performing consistency matching between the fragment identification and the vertex identification in the current rendering style information, and, when the match succeeds, determining the candidate pixel fragment as a target pixel fragment and rendering it.
That is, the current rendering style information is selected, and the fragment identification of each candidate pixel fragment is compared with the vertex identification in the current rendering style information. If the fragment identification matches the vertex identification, the candidate pixel fragment is a target pixel fragment, and the candidate pixel fragment determined to be a target pixel fragment can be rendered.
In one possible implementation, consistency matching may instead start from the vertices of the target mesh according to the vertex identification in the current rendering style information: when a vertex matches, it is determined to be a target vertex; each target vertex is then rasterized to obtain multiple target pixel fragments, and the target pixel fragments are rendered.
In one possible implementation, assuming the mesh is a triangle mesh, the process of rasterizing vertices into candidate pixel fragments specifically includes the following. First, a triangle is obtained from the vertex indices of the input vertices, and the position information and related attributes of its three vertices are extracted; the edge information of the triangle is then computed, i.e., the interpolation-related information of each edge, including slope, edge length, and so on. Next, the minimum and maximum Y coordinates of the current triangle are computed, and the range of scanlines is determined from these two values. Scanning then proceeds line by line from the minimum Y coordinate up to the maximum Y coordinate. For each scanline, the intersections with the triangle are found: using the Y coordinate of the scanline and the edge information, the X coordinates of the intersections between the scanline and the triangle are computed; the start and end points of the scanline span are determined from the X coordinates of the two intersections on the current scanline, and the candidate pixel fragments are obtained from that span. The candidate pixel fragments are then interpolated: according to the position of the current candidate pixel fragment on the scanline and the attribute values of the triangle's vertices, such as texture coordinates, colors, and normals, the attribute values of the candidate pixel fragment are computed by interpolation, and the vertex identifications of the triangle's vertices are likewise interpolated to compute the fragment identification of the fragment. This process repeats until the fragment identifications of all candidate pixel fragments have been obtained.
In one possible implementation, a simpler way of obtaining the fragment identification in the fragment attributes of a candidate pixel fragment may be chosen: when rasterizing the vertices in the target mesh, each triangle is treated as a unit, and every pixel fragment covered by the triangle is given the vertex identification of the triangle's vertices; that is, the fragment identification of a candidate pixel fragment is simply the vertex identification of the vertices of the triangle it belongs to.
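Since all three vertices of a primitive carry the same identification, interpolating that identification across the primitive reproduces the same constant, so this flat assignment is equivalent to the interpolation above. The following CPU-side C# sketch illustrates it; it uses a barycentric coverage test over the triangle's bounding box rather than the scanline traversal described above, and all names are illustrative:

using System.Collections.Generic;
using UnityEngine;

public static class FragmentIdRasterizer
{
    // Covers a triangle with candidate pixel fragments and stamps each one
    // with the triangle's shared vertex identification.
    public static List<(int x, int y, float fragId)> Rasterize(
        Vector2 a, Vector2 b, Vector2 c, float vertexId)
    {
        var fragments = new List<(int, int, float)>();
        int minX = Mathf.FloorToInt(Mathf.Min(a.x, b.x, c.x));
        int maxX = Mathf.CeilToInt(Mathf.Max(a.x, b.x, c.x));
        int minY = Mathf.FloorToInt(Mathf.Min(a.y, b.y, c.y));
        int maxY = Mathf.CeilToInt(Mathf.Max(a.y, b.y, c.y));

        for (int y = minY; y <= maxY; y++)
        for (int x = minX; x <= maxX; x++)
        {
            var p = new Vector2(x + 0.5f, y + 0.5f); // pixel center
            if (Inside(a, b, c, p))
                fragments.Add((x, y, vertexId));     // fragment identification = vertex identification
        }
        return fragments;
    }

    private static float Edge(Vector2 p, Vector2 q, Vector2 r)
        => (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);

    // Point-in-triangle test that accepts both windings.
    private static bool Inside(Vector2 a, Vector2 b, Vector2 c, Vector2 p)
    {
        float w0 = Edge(b, c, p), w1 = Edge(c, a, p), w2 = Edge(a, b, p);
        return (w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0);
    }
}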
In one possible implementation, the process of matching the fragment identification against the vertex identification in the current rendering style information specifically includes: creating a global variable in the rendering pipeline, acquiring the vertex identification from the current rendering style information, and assigning it to the global variable; then traversing the candidate pixel fragments and performing consistency matching between the fragment identification of the current candidate pixel fragment and the global variable.
The rendering pipeline (Rendering Pipeline), also referred to as the graphics pipeline, is the sequence of processing stages and algorithms in a graphics processor used to generate the final image. The rendering pipeline processes the input geometric data (such as vertices, line segments, and triangles), renders it, and converts it into the final image output; it is a core component of graphics rendering and is commonly used in real-time rendering applications. Its stages are as follows. The geometry processing stage is responsible for transforming, clipping, and projecting the input geometric data, converting it from object space to clip space and producing two-dimensional screen coordinates. The rasterization stage converts the geometric data into pixels on the screen, mainly determining which pixels are covered and generating fragments (pixel fragments) with their positions and attributes. The fragment processing stage processes each fragment, e.g., texture sampling, illumination calculation, and depth testing, and computes the fragment's final color value. The output merging stage composes the processed fragments into the final image through operations such as depth testing, stencil testing, and pixel blending, finally writing the final color values into the frame buffer. Through this series of processing steps, the rendering pipeline converts graphics data into the final image result, enabling display and interaction in real-time rendering applications.
In the above embodiment, the global variable created in the rendering pipeline is a uniform variable, a type of global variable used to pass data between the different stages of the rendering pipeline. A uniform variable remains unchanged throughout a rendering pass; its value can be set in the application and used in the various stages of the pipeline. The process of creating a uniform variable in this embodiment is: first define the variable type and name, declaring the uniform variable in the shader with the keyword "uniform"; then set the value of the uniform variable by obtaining its reference or location in the application and passing the vertex identification acquired from the current rendering style information to it as the variable value. The uniform variable is then used directly in the shaders of each pipeline stage. Using uniform variables, data is passed and shared between different stages of the rendering pipeline, for example sharing the same transformation matrix or illumination parameters across stages without passing them manually at each stage, which improves rendering efficiency.
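On the application side, assigning the vertex identification to the uniform before each style's draw call can be sketched with Unity's global shader variables; the property name "_CurrentVertexId" is an assumption for illustration:

using UnityEngine;

// Writes the vertex identification of the current rendering style
// information into a global shader variable (a uniform visible to the
// shaders of the pipeline stages).
public static class StyleUniform
{
    public static void Apply(float vertexId)
    {
        Shader.SetGlobalVector("_CurrentVertexId", new Vector4(vertexId, 0, 0, 0));
    }
}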
Because the vertex identification acquired from the current rendering style information is assigned to the uniform variable as its value, consistency matching traverses the candidate pixel fragments and matches the fragment identification of each current candidate pixel fragment against the vertex identification assigned to the global variable, obtaining a matching result for each candidate pixel fragment; the target pixel fragments indicated by the matching results are then rendered.
The graphics processor receives one batch object corresponding to a rendering batch at a time for rendering. Because each batch object may include multiple pieces of rendering style information that commonly reference the target mesh, and each piece of rendering style information corresponds to a generated scene node object, the graphics processor receives multiple scene node objects within the same batch object and generates a draw call for each scene node object. The draw call is the drawing method provided by the graphics interface; each execution of a draw call invokes the graphics drawing interface once. Each draw call triggers the execution of the stages of the rendering pipeline, finally converting geometric data into visible pixels on the screen; after all the corresponding draw calls in the batch object have executed, the rendering result corresponding to the target mesh is obtained.
In the embodiment of the application, if multiple meshes to be rendered in the same batch object are associated with the same rendering style information, vertex identifications consistent with that rendering style information are added to the vertices of the corresponding meshes, and each piece of rendering style information corresponds to one draw call. The number of draw call invocations is therefore related to the number of kinds of rendering style information in the rendering batch, which decouples the rendering requirements from the number of meshes to be rendered: the amount of rendering style information depends only on the rendering requirements. Even if service requirements grow and more meshes need to be rendered, it suffices to generate vertex identifications according to the rendering style information corresponding to those requirements and add the corresponding vertex identifications to the relevant vertices in the meshes, without generating more rendering batches because of the increase in meshes. This meets service growth requirements on the one hand and reduces the possibility of rendering stutter on the other.
Consistency matching between the fragment identification and the vertex identification in the current rendering style information yields two possible results. When the match succeeds, a target pixel fragment is obtained. When the match fails, a discard function in the fragment shader is called, and the candidate pixel fragment is clipped from the target mesh based on the discard function, removing that candidate pixel fragment; alternatively, when the match fails, the transparency of the candidate pixel fragment is set to a preset value.
The discard function in the fragment shader is the discard statement, used to drop the candidate pixel fragment currently being processed, i.e., to tell the graphics processor that no further processing or output of this candidate pixel fragment is needed. In this embodiment, discard is used together with the conditional statement of the vertex identification match: when the matching result shows that the fragment identification of the candidate pixel fragment is inconsistent with the vertex identification in the current rendering style information, the condition is satisfied and discard executes; the current candidate pixel fragment is then dropped and not written to the frame buffer, avoiding extra computation and processing of fragments that do not need rendering. Alternatively, the gl_FragColor variable in the fragment shader can set the transparency of the current candidate pixel fragment's output color to fully transparent; the preset value is then 100% transparency.
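For illustration, the following CPU-side C# sketch emulates the fragment shader's matching step and both mismatch strategies; it is an emulation for clarity, not shader code, and all names are assumptions:

using System.Collections.Generic;
using UnityEngine;

public static class FragmentFilter
{
    // Fragments whose identification matches the current style's vertex
    // identification are kept; mismatches are either dropped (the discard
    // strategy) or output fully transparent (the transparency strategy).
    public static List<Color> Shade(
        IEnumerable<(float fragId, Color styled)> fragments,
        float currentVertexId, bool useDiscard)
    {
        var output = new List<Color>();
        foreach (var (fragId, styled) in fragments)
        {
            if (Mathf.Approximately(fragId, currentVertexId))
            {
                output.Add(styled);     // target pixel fragment: render with the style
            }
            else if (!useDiscard)
            {
                var transparent = styled;
                transparent.a = 0f;     // transparency strategy: fully transparent output
                output.Add(transparent);
            }
            // useDiscard == true: the mismatching fragment is dropped,
            // like the discard statement in GLSL.
        }
        return output;
    }
}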
Referring to fig. 12, fig. 12 is a rendering diagram according to an embodiment of the present application. In the rendering process corresponding to a certain piece of rendering style information in fig. 12, the target mesh includes target pixel fragments, whose fragment identifications are consistent with the vertex identification of the rendering style information, and other candidate pixel fragments; in fig. 12 the target pixel fragments are represented by triangles and the other candidate pixel fragments by circles. Assume the rendering style information is "render to black". On the one hand, the other candidate pixel fragments can be removed with the discard function in the fragment shader, and only the screened-out target pixel fragments are rendered black. On the other hand, the transparency of the other candidate pixel fragments can be set to 100% with the gl_FragColor variable in the fragment shader; in fig. 12 the dotted lines in the rendered image indicate 100% transparency, which in an actual scene is invisible in the rendered image, while the target pixel fragments are rendered black according to the rendering style information.
Thus, when a piece of rendering style information is rendered, the process can be controlled to select the target pixel fragments to be rendered according to the vertex identification, either removing the remaining fragments so that the rendering effect applies only to the target pixel fragments, or setting the remaining fragments fully transparent while the target pixel fragments are rendered according to the rendering style information.
In one possible implementation, the pieces of rendering style information are executed in the rendering order of the service scene, i.e., the graphics processor issues the draw calls in order. Referring to fig. 13, fig. 13 is a rendering schematic diagram provided in an embodiment of the present application. Fig. 13 shows two pieces of rendering style information. Assuming the target pixel fragments form a rectangle, and that in rendering order the first rendering style information is "diagonal stripe filling" and the second is "black thickened frame", the draw calls are executed in sequence: the rectangle is first filled with diagonal stripes to obtain a diagonally striped rectangle, and a black thickened frame is then added to it, yielding the rendered rectangular frame.
In one possible implementation, referring to fig. 14, fig. 14 is a schematic diagram of still another rendering provided by an embodiment of the present application. In the figure, the target mesh comprises three meshes to be rendered and is associated with three pieces of rendering style information; assume the vertices associated with the first rendering style information are represented by open triangles, those associated with the second by black filled triangles, and those associated with the third by diagonally filled triangles. The graphics processor issues the corresponding draw calls one by one and removes non-target pixel fragments with the discard function in the fragment shader during rendering. First, the target pixel fragments related to the first rendering style information are screened out by vertex identification and rendered, i.e., the open triangles are screened out as target pixel fragments and rendered. Then the target pixel fragments related to the second rendering style information are screened out by vertex identification and rendered, i.e., the black filled triangles. Finally the target pixel fragments related to the third rendering style information are screened out by vertex identification and rendered, i.e., the diagonally filled triangles.
According to the embodiment of the application, the graphics processor receives the target mesh and the pieces of rendering style information, screens out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications, and renders them. Thus, even if multiple meshes to be rendered are merged into the target mesh, the graphics processor can split the target mesh by vertex identification at rendering time and screen out the corresponding target pixel fragments for rendering, so that, on the basis of rendering each mesh to be rendered independently, the number of meshes is effectively reduced, the interaction frequency with the graphics processor is lowered, the stutter of image rendering is mitigated, and the smoothness of image rendering is improved.
The processing procedures of the central processor and the graphics processor in the image rendering method of the embodiment of the present application are described in detail below.
Referring to fig. 15, fig. 15 is a schematic diagram of a system logic architecture according to an embodiment of the present application.
The rendering engine in fig. 15 includes: a tile cache, style management, a data engine, an element modeling module, style decoration, a geometric modeling toolbox, a mesh material transmission hub, and a mesh manager.
In the tile cache, the data to be rendered is divided into fixed-size tiles, each containing a small portion of the data, e.g., a region of a map or part of an image. The tiles are organized in a hierarchical structure and can be rendered at different levels of detail; for example, road tiles may be graded into levels 0-1, levels 3-4, or other levels, with different levels rendered in different colors. When a particular region needs to be rendered, the rendering engine calculates the required tiles according to the current view position and zoom level, then first checks the tile cache to see whether the corresponding tile data is already available. If the tile data exists in the tile cache, the rendering engine renders directly with the cached data, without retrieving the tiles from the original data source; if the tile data is not in the cache, the rendering engine retrieves the corresponding tiles from the original data source and stores them in the tile cache for later use. In this way, in subsequent renderings the same tile can be fetched directly from the cache without accessing the original data source again, reducing data access and loading time.

Style management manages all the rendering style information used by the data to be rendered and stores a rendering style information list. Style decoration enhances the appearance and appeal of an object or scene by adding extra visual effects and embellishments during rendering; these extra effects may be textures, shadows, reflections, transparency, animations, and the like, making the object or scene more vivid and attractive. The data engine manages and processes the data related to rendering; it acquires the required data from different data sources and processes and converts it according to rendering requirements before providing it to the rendering engine for rendering.

The geometric modeling toolbox is a set of tools and algorithms for creating and editing geometric models, including basic geometry generation, curve and surface modeling, combination and transformation operations, Boolean operations, surface subdivision, solid modeling, mesh editing and modification, parametric modeling, and so on. It is used to create and edit various complex geometries, realize imaginative scenes, characters, and objects, and perform realistic rendering and presentation in the rendering engine.

The mesh material transmission hub is a tool or plug-in for transmitting material data between different mesh shapes. It is mainly used to create and edit the material of one mesh model (including color, texture, normal map, etc.) in three-dimensional modeling software and then apply that material to another mesh model. Transferring materials between different mesh shapes is generally difficult, because the number of faces, number of vertices, texture mapping, and so on may differ between shapes, so material data may not map correctly onto the new shape. The mesh material transmission hub provides functions such as material sampling, attribute matching, material transmission, and material adjustment, so the material of one mesh model can be applied to another without manual editing and adjustment, and the two models obtain consistent rendering results.
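The tile cache's check-fetch-store cycle described above can be sketched as follows; the (x, y, zoom) key layout and the loader delegate are illustrative assumptions:

using System;
using System.Collections.Generic;

// Sketch of the tile cache: tiles are looked up by position and zoom level,
// fetched from the original data source on a miss, and stored for reuse.
public class TileCache<TTile>
{
    private readonly Dictionary<(int x, int y, int zoom), TTile> cache =
        new Dictionary<(int x, int y, int zoom), TTile>();
    private readonly Func<(int x, int y, int zoom), TTile> loadFromSource;

    public TileCache(Func<(int x, int y, int zoom), TTile> loadFromSource)
        => this.loadFromSource = loadFromSource;

    public TTile Get((int x, int y, int zoom) key)
    {
        if (!cache.TryGetValue(key, out var tile))
        {
            tile = loadFromSource(key); // miss: fetch from the original data source
            cache[key] = tile;          // store for subsequent renderings
        }
        return tile;
    }
}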
The element modeling module adds vertex identifications to the rendering style information and to the vertex attributes of the associated meshes to be rendered. With corresponding vertex identifications added to both, the rendering style information corresponding to each vertex can be marked by its vertex identification, without depending on which mesh the vertex belongs to or on its index position in the vertex index list. If a service requirement is updated and the rendering style information of some vertices needs to change, the vertex identification corresponding to the new rendering style information is acquired and used to update the vertex attributes of the corresponding vertices. Adding vertex identifications enhances the flexibility and extensibility of the rendering pipeline: which vertices to render can be selected according to service needs, to meet specific rendering requirements. By adding the vertex identification into the vertex attributes, the embodiment of the application provides a vertex data structure that is more flexible and easier to manage and manipulate, and enables locally customized rendering.
The mesh manager merges the meshes to be rendered according to mesh type, merging meshes of the same mesh type together to obtain one target mesh per type, effectively reducing the number of meshes; it is also used to configure the target mesh as a mesh commonly referenced by the corresponding rendering style information.
The rendering engine of the graphics processor in fig. 15 includes: a Unity mesh generation module, a material creation module, a shader script module, global scene management, and global material management.
Global scene management controls scene loading and switching, scene visibility, and scene data persistence during rendering. It enables loading and switching between different scenes, ensuring that the correct scene is rendered; scene visibility is set as needed so that only the currently active and visible scene is rendered, reducing rendering cost and improving rendering performance. It also handles the saving and restoring of scene data, ensuring that important data is carried over and preserved when scenes are switched. Global material management controls material creation and loading, material attribute setting, material application and replacement, and material sharing and multiplexing during rendering. It enables the creation and loading of material instances, ready to be applied to the material attributes of geometry. Material attributes of the geometry, such as color, texture, and transparency, can also be set; these affect the appearance of the geometry during rendering, and different attribute settings achieve various effects such as reflection, refraction, and illumination. It is also used to apply materials to the renderers of geometry so that the set attributes are displayed during rendering. In addition, the sharing and multiplexing of materials is optimized, reducing memory occupation and rendering overhead: the same material can be shared among multiple geometries, avoiding repeated creation and loading of identical materials. In the embodiment of the application, global scene management and global material management, through a unified interface and mechanism, ensure that the correct scene is rendered during rendering and that geometry is rendered with the correct material attributes as required.
In addition, the Unity mesh generation module receives the target mesh, converts it into the rendering structure defined by Unity, and adds it to Unity's global scene management. The material creation module invokes the material interface according to the material attributes in each piece of received rendering style information, sets the various attribute values of the material, such as color, texture, transparency, and reflectivity, and applies the material to the renderer to be rendered, so the rendered appearance matches the set material attributes. The shader script module screens out, from the target mesh according to the vertex identifications, the vertices corresponding to each piece of rendering style information for rendering.
From the foregoing, it can be seen that the rendering process of the embodiments of the present application is performed jointly by the central processing unit and the graphics processor. The two may be arranged independently or integrated together; this embodiment is not limited in this respect.
Referring to fig. 16, fig. 16 is an overall process flow diagram of an image rendering method according to an embodiment of the present application.
Firstly, in the grid modeling stage, the central processing unit acquires a plurality of grids to be rendered and the rendering style information associated with each grid to be rendered, then generates vertex identifications corresponding to the rendering style information and adds them both to the rendering style information and to the vertex attributes of the associated grids to be rendered.
A grid to be rendered consists of one or more not-yet-rendered grids determined according to the service rendering requirement, and the rendering style information associated with a grid to be rendered refers to the attribute parameters, such as appearance and effects, to be applied to that grid according to the service requirement; it describes the style of the grid to be rendered in the rendered image. Rendering style information in this embodiment includes parameters such as color and texture attributes, illumination attributes, transparency attributes, and reflection attributes.
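For concreteness, rendering style information could be held in a plain data structure such as the following sketch; the field names are illustrative assumptions rather than the disclosed format.

```csharp
using UnityEngine;

// One piece of rendering style information (field names assumed).
public class RenderStyleInfo
{
    public float vertexId;      // vertex identification generated for this style
    public Color color;         // color attribute
    public Texture2D texture;   // texture attribute
    public float illumination;  // illumination attribute
    public float transparency;  // transparency attribute
    public float reflectivity;  // reflection attribute
}
```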
To associate the rendering style information with the grid to be rendered, a vertex identification is generated for each piece of rendering style information and associated with both, which guarantees a unique relationship between them and avoids collision or confusion during rendering. Using the vertex identifications, the required style of a grid to be rendered can be obtained in different rendering scenes from the rendering style information corresponding to its vertex identification, realizing personalized and customized rendering effects.
In one possible implementation, there are three cases of grid-to-style association. In the first case, the same grid to be rendered is associated with a plurality of different pieces of rendering style information; a second grid identifier marking that grid is acquired, and the vertex identifiers of the plurality of different pieces of rendering style information are all set to this second grid identifier, so that the grid to be rendered corresponds to its several pieces of rendering style information through the second grid identifier. In the second case, a plurality of different grids to be rendered are associated with the same rendering style information; the vertex identifier of the rendering style information is then the style identifier of that information, which distinguishes different pieces of rendering style information. In the third case, different grids to be rendered are each associated with different rendering style information; the vertex identifier may then be either the second grid identifier of the grid or the style identifier of the rendering style information. In summary, the three ways effectively associate grids to be rendered with rendering style information, so that the different style information is applied as expected during rendering; a sketch of the three cases follows below.
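The following sketch illustrates how the vertex identifier could be chosen in each of the three cases; the identifier sources and method names are assumptions made for illustration only.

```csharp
public static class VertexIdPolicy
{
    // Case 1: one grid, several styles -> every style carries the grid's
    // second grid identifier, so they all point back to the same grid.
    public static int ForSharedGrid(int secondGridId) => secondGridId;

    // Case 2: one style, several grids -> the style's own style identifier
    // is used, distinguishing it from other rendering style information.
    public static int ForSharedStyle(int styleId) => styleId;

    // Case 3: one grid, one style -> either identifier works; the second
    // grid identifier is used here as an arbitrary choice.
    public static int ForOneToOne(int secondGridId, int styleId) => secondGridId;
}
```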
In one possible implementation, there are a plurality of grids to be rendered, each comprising a plurality of vertices. Rendering a grid with rendering style information is, in essence, rendering its vertices with that information, so the embodiment of the application adds the vertex identifier corresponding to the rendering style information to the vertex attributes of every vertex in the grid to be rendered.
In addition, if different areas of a grid to be rendered have different rendering requirements, the grid can be divided into sub-grids, and the sub-grids can be associated with the same or different rendering style information. When the grid to be rendered is divided into a plurality of sub-grids, obtaining the vertex identifier corresponding to the associated rendering style information from the second grid identifier specifically includes: acquiring a sub-grid identifier marking the sub-grid, and splicing the second grid identifier with the sub-grid identifier to obtain the vertex identifier corresponding to the associated rendering style information, as sketched below.
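A minimal sketch of the splicing step; packing the two identifiers into the high and low bits of a single integer is one possible scheme assumed here, not a scheme prescribed by the method.

```csharp
public static class SubGridIds
{
    // Splices the second grid identifier with the sub-grid identifier to
    // obtain the vertex identifier of the sub-grid's rendering style.
    // Assumes each identifier fits in 16 bits.
    public static int Splice(int secondGridId, int subGridId)
    {
        return (secondGridId << 16) | (subGridId & 0xFFFF);
    }
}
```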
By adding the corresponding vertex identifications to the rendering style information and the vertex attributes, the rendering style information corresponding to each vertex can be marked through the vertex identifications, without depending on the grid to be rendered or its index position in the vertex index list.
Referring to fig. 16, the batch object generation stage of the central processing unit is then entered, in which the plurality of grids to be rendered are merged to obtain at least one target grid. Specifically, the grids to be rendered are merged according to grid type: grids of the same type are merged together to obtain a target grid for each type, effectively reducing the number of grids. The grid type indicates the type of element modeled by the grid to be rendered, and the element types can be set according to the specific application scenario.
In one possible implementation, configuring the target grid as the grid commonly referenced by the corresponding rendering style information includes: generating a first grid identifier of the target grid, the first grid identifier marking different target grids; then taking each target grid as one rendering batch, and, for each piece of rendering style information corresponding to the target grid in the current rendering batch, reading the target grid from the grid cache region according to the first grid identifier and configuring the read target grid as the grid commonly referenced by that rendering style information. The cache lookup could look like the sketch below.
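A minimal sketch of such a grid cache region, assuming a dictionary keyed by the first grid identifier (for example the spliced grid type and style type); the class shape is an assumption of this sketch.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class GridCache
{
    private readonly Dictionary<string, Mesh> cache =
        new Dictionary<string, Mesh>();

    // Reads the target grid for a first grid identifier; when it cannot be
    // read, the identifier and the target grid are written into the cache
    // in association, as described for the cache-miss case.
    public Mesh GetOrAdd(string firstGridId, Mesh targetGrid)
    {
        if (cache.TryGetValue(firstGridId, out var cached))
            return cached;
        cache[firstGridId] = targetGrid;
        return targetGrid;
    }
}
```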
Referring to fig. 16, the scene node object creation stage of the central processing unit is then entered. After the batch processor generates a rendering batch, reading the target grid for each piece of rendering style information in the current rendering batch and configuring it as the commonly referenced grid specifically includes: traversing the style object list in the batch object to obtain the current style object; creating a scene node object for the current style object; creating a material object according to the style object, binding the material object with the scene node object, and adding the vertex identifier in the style object to the material object; and finally reading the target grid from the grid cache region according to the first grid identifier and configuring the read target grid as the grid commonly referenced by the scene node object.
A scene node object comprises a Mesh Filter component and a Mesh Renderer component. The mesh filter component stores and manages the geometric mesh data of an object, including vertex coordinates, triangle indices, texture coordinates, normals, and the like, and provides this data to the mesh renderer component, which sends the target mesh and the plurality of rendering style information to the shader for rendering.
Each style object in the style object list corresponds to one scene node object, and all scene node objects are mounted on the shared grid corresponding to the target grid: the shared grid is assigned to the sharedMesh attribute of the mesh filter component of each scene node object. Because multiple scene node objects share the same target grid, memory occupation and rendering calls are reduced, improving rendering performance and efficiency. A sketch of creating such a scene node object follows.
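A minimal Unity C# sketch of creating one scene node object per style object and mounting it on the shared target grid; the material property name "_VertexId" is an assumption of this sketch.

```csharp
using UnityEngine;

public static class SceneNodeFactory
{
    public static GameObject Create(Mesh targetGrid, Material styleMaterial,
                                    float vertexId)
    {
        var node = new GameObject("SceneNode");

        // Many scene node objects share one target grid via sharedMesh.
        var filter = node.AddComponent<MeshFilter>();
        filter.sharedMesh = targetGrid;

        // Bind the material object carrying the style's vertex identifier.
        styleMaterial.SetFloat("_VertexId", vertexId); // property name assumed
        var renderer = node.AddComponent<MeshRenderer>();
        renderer.sharedMaterial = styleMaterial;

        return node;
    }
}
```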
Referring to fig. 16, the rendering stage of the graphics processor is then entered. The shader of the graphics processor obtains the target grid and the plurality of rendering style information, obtains from them the vertex identifications and the material objects corresponding to each piece of rendering style information, and then screens out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications and renders them to obtain the rendering result.
In one possible implementation, screening out and rendering the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications specifically includes: rasterizing each vertex in the target grid to obtain a plurality of candidate pixel fragments, the fragment attributes of which include fragment identifications interpolated from the vertex identifications contained in the vertex attributes of the corresponding vertices; then matching the fragment identification against the vertex identification in the current rendering style information for consistency, and, when they match, determining the candidate pixel fragment to be a target pixel fragment and rendering it.
Specifically, the current rendering style information is selected, and the fragment identifier of each candidate pixel fragment is compared with the vertex identifier in the current rendering style information. If the fragment identifier matches the vertex identifier, the candidate pixel fragment is a target pixel fragment and can be rendered.
In one possible implementation, consistency matching may instead start from the vertices of the target grid according to the vertex identifier in the current rendering style information: vertices whose identifiers match are determined to be target vertices, each target vertex is then rasterized to obtain a plurality of target pixel fragments, and the target pixel fragments are rendered.
In one possible implementation, matching the fragment identifier with the vertex identifier in the current rendering style information yields two possible results. When they match, the target pixel fragment is obtained. When they do not match, a discard function in the fragment shader is called and the candidate pixel fragment is clipped from the target grid based on the discard function, removing that fragment; alternatively, when they do not match, the transparency of the candidate pixel fragment is set to a preset value. A CPU-side illustration of this screening follows.
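The screening itself runs in the fragment shader; purely as an illustration, the following C# sketch mirrors the per-fragment decision on the CPU. The 0.5 tolerance, the method names, and the use of null to stand in for the shader's discard are all assumptions of this sketch.

```csharp
using UnityEngine;

public static class FragmentScreening
{
    // fragmentId is interpolated from the vertex identifications during
    // rasterization, so matching uses a tolerance rather than equality.
    public static Color? Shade(float fragmentId, float currentStyleId,
                               Color styleColor, bool useTransparency)
    {
        if (Mathf.Abs(fragmentId - currentStyleId) < 0.5f)
            return styleColor;                // target pixel fragment: render it

        if (useTransparency)
            return new Color(0f, 0f, 0f, 0f); // transparency set to a preset value

        return null;                          // stands in for the shader's discard
    }
}
```

In an actual fragment shader, the non-matching branch would call the discard function or write the preset transparency, exactly as described above.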
Because the vertex attributes of the vertices include the vertex identifications corresponding to the rendering style information, when the graphics processor renders the rendering style information one by one, it can screen out the vertices corresponding to each piece of rendering style information as target vertices according to the vertex identifications, the target vertices and the rendering style information sharing the same vertex identification. The target vertices are then converted into target pixel fragments, and the target pixel fragments are rendered. Therefore, even though the graphics processor receives a target grid merged from multiple grids to be rendered, it can split the target grid through the vertex identifications during rendering and screen out the target pixel fragments corresponding to each piece of rendering style information. This preserves the basis for rendering each grid independently while avoiding interaction with a large number of grids to be rendered, reducing the interaction frequency, alleviating stutter in image rendering, and improving the smoothness of image rendering.
It can be understood that, in the embodiment of the application, a plurality of grids to be rendered and the rendering style information associated with each grid are acquired; vertex identifications corresponding to the rendering style information are generated and added both to the rendering style information and to the vertex attributes of the associated grids, so that the vertices corresponding to each piece of rendering style information are marked by the vertex identifications; the grids to be rendered are merged to obtain at least one target grid, which is configured as the grid commonly referenced by the corresponding rendering style information, effectively reducing the number of grids; and the target grid and each piece of rendering style information are sent to the graphics processor, which screens out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications and renders them.
It will be appreciated that, although the steps in the flowcharts described above are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated in this embodiment, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include several sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; these sub-steps or stages are not necessarily executed sequentially but may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Referring to fig. 17, fig. 17 is a schematic diagram illustrating an alternative configuration of an image rendering apparatus according to an embodiment of the present application, where the first image rendering apparatus 1700 includes:
the information acquisition module 1701: configured to acquire a plurality of grids to be rendered and the rendering style information associated with each grid to be rendered;
the vertex identification acquisition module 1702: configured to generate vertex identifications corresponding to the rendering style information, and to add the vertex identifications to the rendering style information and to the vertex attributes of the associated grids to be rendered;
the mesh merging module 1703: configured to merge the plurality of grids to be rendered to obtain at least one target grid, and to configure the target grid as the grid commonly referenced by the corresponding rendering style information;
the rendering information sending module 1704: configured to send the target grid and each piece of rendering style information to the graphics processor, for the graphics processor to screen out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications and render them, wherein the target pixel fragments are converted from target vertices in the target grid, and the target vertices and the rendering style information have the same vertex identification.
Further, the mesh merging module 1703 is specifically configured to:
generating a first grid identification of the target grid, wherein the first grid identification is used for marking different target grids;
and taking each target grid as one rendering batch respectively, and, for each piece of rendering style information corresponding to the target grid in the current rendering batch, reading the target grid from the grid cache region according to the first grid identification and configuring the read target grid as the grid commonly referenced by the rendering style information.
Further, rendering style information is configured as a style object, and the mesh merging module 1703 is further configured to:
Merging a plurality of style objects corresponding to the same target grid into a style object list;
and constructing batch objects of the rendering batch where the target grid is positioned according to the first grid identification, the target grid and the style object list.
Further, the mesh merging module 1703 is further configured to:
traversing a style object list in the batch objects to obtain a current style object in the style object list;
creating a scene node object for the current style object, creating a material object according to the style object, binding the material object with the scene node object, and adding a vertex mark in the style object to the material object;
and reading the target grid from the grid cache region according to the first grid identification, and configuring the read target grid as the grid commonly referenced by the scene node objects.
Further, the target mesh is obtained by merging a plurality of meshes to be rendered with the same mesh type, the mesh type is used for indicating the type of the element modeled by the meshes to be rendered, and the mesh merging module 1703 is further used for:
acquiring a grid type of a grid to be rendered and a style type of rendering style information;
and splicing the grid type and the style type to obtain a first grid identification of the target grid.
Further, the mesh merging module 1703 is further configured to:
and when the target grid cannot be read from the grid cache region according to the first grid identification, writing the first grid identification in association with the target grid into the grid cache region.
Further, the vertex identification obtaining module 1702 is further configured to:
when the same grid to be rendered is respectively associated with different rendering style information, acquiring a second grid identifier of the current grid to be rendered, and acquiring vertex identifiers corresponding to the associated rendering style information according to the second grid identifier, wherein the second grid identifier is used for marking the different grids to be rendered;
when the same rendering style information is respectively associated with different grids to be rendered, style identifiers of the rendering style information are obtained, the style identifiers are used as vertex identifiers corresponding to the rendering style information, and the style identifiers are used for marking different rendering style information;
when different grids to be rendered are respectively associated with different rendering style information, a second grid identifier is obtained, and a vertex identifier corresponding to the associated rendering style information is obtained according to the second grid identifier, or a style identifier is obtained, and the style identifier is used as the vertex identifier corresponding to the rendering style information.
Further, the mesh to be rendered is divided into a plurality of sub-meshes, each sub-mesh is associated with different rendering style information, and the vertex identification obtaining module 1702 is further configured to:
acquiring a sub-grid identification for marking the sub-grid;
and splicing the second grid mark and the sub-grid mark to obtain vertex marks corresponding to the associated rendering style information.
The first image rendering device 1700 and the image rendering method based on the central processing unit are based on the same inventive concept. By acquiring a plurality of grids to be rendered and the rendering style information associated with each grid, generating vertex identifications corresponding to the rendering style information, adding the vertex identifications to the rendering style information and to the vertex attributes of the associated grids so that the vertices corresponding to each piece of rendering style information are marked, merging the grids to be rendered to obtain at least one target grid, configuring the target grid as the grid commonly referenced by the corresponding rendering style information (thereby effectively reducing the number of grids), and sending the target grid and each piece of rendering style information to the graphics processor, the graphics processor can screen out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications. Even though multiple grids are merged into one target grid, the graphics processor can split it by the vertex identifications during rendering, which preserves independent rendering of each grid while reducing the number of grids and the interaction frequency with the graphics processor, alleviating rendering stutter and improving rendering smoothness.
Referring to fig. 18, fig. 18 is a schematic diagram illustrating an alternative structure of an image rendering apparatus according to an embodiment of the present application, and the second image rendering apparatus 1800 includes:
the rendering information receiving module 1801: configured to acquire a target grid and a plurality of pieces of rendering style information, wherein the target grid is obtained by merging the grids to be rendered associated with each piece of rendering style information, the vertex attributes of the grids to be rendered and the rendering style information both comprise vertex identifications, and the target grid is configured as the grid commonly referenced by the rendering style information;
the rendering module 1802: configured to screen out the target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications and render them, wherein the target pixel fragments are converted from target vertices in the target grid, and the target vertices and the rendering style information have the same vertex identification.
Further, the rendering module 1802 is specifically configured to:
rasterizing each vertex in the target grid to obtain a plurality of candidate pixel fragments, wherein the fragment attribute of the candidate pixel fragment comprises fragment identifications, and the fragment identifications are obtained by interpolation of vertex identifications in corresponding vertices;
and carrying out consistency matching on the segment identification and the vertex identification in the current rendering style information, and when the matching result is consistent, determining the candidate pixel segment as a target pixel segment and rendering the target pixel segment.
Further, the rendering module 1802 is further configured to:
creating a global variable in a rendering pipeline, acquiring a vertex identifier from current rendering style information, and assigning the global variable as the vertex identifier;
traversing the plurality of candidate pixel fragments, and matching the fragment identifier of the current candidate pixel fragment against the global variable for consistency; a sketch of setting such a global variable follows below.
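As an illustration, assigning such a global variable in Unity could go through Shader.SetGlobalFloat; the global name "_CurrentStyleId" is an assumption of this sketch.

```csharp
using UnityEngine;

public static class GlobalStyleId
{
    // Assigns the rendering pipeline's global variable to the vertex
    // identifier of the current rendering style information, so the shader
    // can match fragment identifiers against it.
    public static void Set(float vertexId)
    {
        Shader.SetGlobalFloat("_CurrentStyleId", vertexId);
    }
}
```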
Further, the rendering module 1802 is further configured to:
when the matching result is inconsistent, calling a discarding function in the fragment shader, and cutting the candidate pixel fragments based on the discarding function;
or when the matching result is inconsistent, setting the transparency of the candidate pixel fragments to a preset value.
The second image rendering device 1800 and the image rendering method based on the graphics processor are based on the same inventive concept. When the graphics processor renders the rendering style information one by one, it can screen out the vertices corresponding to each piece of rendering style information as target vertices according to the vertex identifications, the target vertices and the rendering style information having the same vertex identification; the target vertices are then converted into target pixel fragments, which are rendered. Therefore, even though the graphics processor receives a target grid merged from multiple grids to be rendered, it can split the target grid through the vertex identifications during rendering and screen out the target pixel fragments corresponding to each piece of rendering style information, preserving the basis for rendering each grid independently without interacting with a large number of grids, reducing the interaction frequency, alleviating stutter in image rendering, and improving the smoothness of image rendering.
The electronic device for executing the image rendering method according to the embodiment of the present application may be a terminal. Referring to fig. 19, fig. 19 is a partial block diagram of the terminal according to an embodiment of the present application. The terminal includes: a camera assembly 1910, a first memory 1920, an input unit 1930, a display unit 1940, a sensor 1950, an audio circuit 1960, a wireless fidelity (WiFi) module 1970, a first processor 1980, a power supply 1990, and the like. It will be appreciated by those skilled in the art that the terminal structure shown in fig. 19 does not limit the terminal, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The camera assembly 1910 may be used to capture images or video. Optionally, the camera assembly 1910 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera on its rear surface. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, virtual reality (VR) shooting, or other fused shooting functions.
The first memory 1920 may be used to store software programs and modules, and the first processor 1980 executes various functional applications and data processing of the terminal by executing the software programs and modules stored in the first memory 1920.
The input unit 1930 may be used to receive input numerical or character information and to generate key signal inputs related to the setting and function control of the terminal. In particular, the input unit 1930 may include a touch panel 1931 and other input devices 1932.
The display unit 1940 may be used to display information input by the user or information provided to the user, as well as the various menus of the terminal. The display unit 1940 may include a display panel 1941.
The audio circuitry 1960, speaker 1961, and microphone 1962 may provide an audio interface.
The power supply 1990 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery.
The number of sensors 1950 may be one or more, the one or more sensors 1950 including, but not limited to: acceleration sensors, gyroscopic sensors, pressure sensors, optical sensors, etc. Wherein:
the acceleration sensor may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal. For example, an acceleration sensor may be used to detect the components of gravitational acceleration in three coordinate axes. The first processor 1980 may control the display unit 1940 to display the user interface in a lateral view or a longitudinal view according to the gravitational acceleration signal acquired by the acceleration sensor. The acceleration sensor may also be used for the acquisition of motion data of a game or a user.
The gyroscope sensor can detect the body direction and the rotation angle of the terminal, and the gyroscope sensor can be cooperated with the acceleration sensor to collect the 3D action of the user on the terminal. The first processor 1980 may implement the following functions based on the data collected by the gyro sensor: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor may be disposed at a side frame of the terminal and/or a lower layer of the display unit 1940. When the pressure sensor is disposed at a side frame of the terminal, a grip signal of the terminal by a user may be detected, and left-right hand recognition or shortcut operation may be performed by the first processor 1980 based on the grip signal collected by the pressure sensor. When the pressure sensor is disposed at the lower layer of the display unit 1940, control of the operability control on the UI interface is achieved by the first processor 1980 according to the pressure operation of the display unit 1940 by the user. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor is used to collect the ambient light intensity. In one embodiment, first processor 1980 may control the display brightness of display unit 1940 based on the intensity of ambient light collected by the optical sensor. Specifically, when the intensity of the ambient light is high, the display luminance of the display unit 1940 is turned up; when the ambient light intensity is low, the display brightness of the display unit 1940 is turned down. In another embodiment, first processor 1980 may also dynamically adjust the shooting parameters of camera assembly 1910 based on the intensity of ambient light collected by the optical sensor.
In this embodiment, the first processor 1980 included in the terminal may perform the image rendering method of the previous embodiment.
The electronic device for executing the image rendering method according to the embodiment of the present application may also be a server. Referring to fig. 20, fig. 20 is a partial block diagram of the server according to an embodiment of the present application. The server 2000 may vary considerably in configuration or performance and may include one or more second processors 2022, a second memory 2032, and one or more storage media 2030 (such as one or more mass storage devices) storing an application 2042 or data 2044. The second memory 2032 and the storage medium 2030 may be transitory or persistent storage. The program stored in the storage medium 2030 may include one or more modules (not shown), each of which may include a series of instruction operations on the server 2000. Further, the second processor 2022 may be configured to communicate with the storage medium 2030 and execute on the server 2000 the series of instruction operations stored in the storage medium 2030.
The server 2000 may also include one or more power supplies 2026, one or more wired or wireless network interfaces 2050, one or more input/output interfaces 2058, and/or one or more operating systems 2041, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The second processor 2022 in the server 2000 may be used to perform an image rendering method.
The embodiments of the present application also provide a computer-readable storage medium storing program code for executing the image rendering method of the foregoing embodiments.
Embodiments of the present application also provide a computer program product comprising a computer program stored on a computer readable storage medium. A processor of a computer device reads the computer program from a computer-readable storage medium, and the processor executes the computer program so that the computer device performs the image rendering method described above.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate to describe embodiments of the application such as capable of being practiced otherwise than as shown or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the preceding and following objects. "At least one of" and similar expressions mean any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may each be singular or plural.
It should be understood that in the description of the embodiments of the present application, "plural" (or "multiple") means two or more; "greater than", "less than", "exceeding", and the like are understood to exclude the stated number, while "above", "below", "within", and the like are understood to include the stated number.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
It should also be appreciated that the various embodiments provided by the embodiments of the present application may be arbitrarily combined to achieve different technical effects.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the above embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit and scope of the present application, and these equivalent modifications or substitutions are included in the scope of the present application as defined in the appended claims.

Claims (16)

1. An image rendering method, comprising:
acquiring a plurality of grids to be rendered and rendering style information associated with each grid to be rendered;
generating vertex identifications corresponding to the rendering style information, adding the vertex identifications to the rendering style information and adding the vertex identifications to the vertex attributes of the associated grids to be rendered;
merging the grids to be rendered to obtain at least one target grid, and configuring the target grid into a grid commonly referenced by the corresponding rendering style information;
and sending the target grid and each piece of rendering style information to a graphics processor for the graphics processor to screen out target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications for rendering, wherein the target pixel fragments are obtained based on target vertex conversion in the target grid, and the target vertex and the rendering style information have the same vertex identifications.
2. The image rendering method according to claim 1, wherein the configuring the target mesh as a mesh commonly referenced by the corresponding rendering style information includes:
generating a first grid identification of the target grid, wherein the first grid identification is used for marking different target grids;
and taking each target grid as one rendering batch respectively, and, for each piece of the rendering style information corresponding to the target grid in the current rendering batch, reading the target grid from a grid cache region according to the first grid identification and configuring the read target grid as the grid commonly referenced by the rendering style information.
3. The image rendering method according to claim 2, wherein the rendering style information is configured as style objects, and the taking each of the target grids as one rendering batch includes:
merging a plurality of style objects corresponding to the same target grid into a style object list;
and constructing batch objects of the rendering batch where the target grid is positioned according to the first grid identification, the target grid and the style object list.
4. The image rendering method according to claim 3, wherein the reading the target grid from the grid cache region according to the first grid identification for each piece of the rendering style information corresponding to the target grid, and configuring the read target grid as the grid commonly referenced by the rendering style information, includes:
traversing the style object list in the batch of objects to acquire the current style object in the style object list;
creating a scene node object for the current style object, creating a material object according to the style object, binding the material object with the scene node object, and adding the vertex identification in the style object to the material object;
and reading the target grid from the grid cache region according to the first grid identification, and configuring the read target grid as the grid commonly referenced by the scene node object.
5. The image rendering method according to claim 2, wherein the target grid is obtained by combining a plurality of grids to be rendered with the same grid type, the grid type is used for indicating the type of the element modeled by the grid to be rendered, and the generating the first grid identifier of the target grid includes:
Acquiring the grid type of the grid to be rendered and the style type of the rendering style information;
and splicing the grid type and the style type to obtain a first grid identification of the target grid.
6. The image rendering method according to claim 2, wherein after the target grid is read from the grid cache region according to the first grid identification, the image rendering method further comprises:
and when the target grid cannot be read from the grid cache region according to the first grid identification, writing the first grid identification in association with the target grid into the grid cache region.
7. The image rendering method according to claim 1, wherein the generating the vertex identifications corresponding to the rendering style information includes:
when the same grid to be rendered is respectively associated with different rendering style information, acquiring a second grid identifier of the current grid to be rendered, and obtaining vertex identifiers corresponding to the associated rendering style information according to the second grid identifier, wherein the second grid identifier is used for marking the different grids to be rendered;
When the same rendering style information is respectively associated with different grids to be rendered, style identifiers of the rendering style information are obtained, and the style identifiers are used as vertex identifiers corresponding to the rendering style information, wherein the style identifiers are used for marking different rendering style information;
when different grids to be rendered are respectively associated with different rendering style information, acquiring the second grid identification, and acquiring vertex identifications corresponding to the associated rendering style information according to the second grid identification, or acquiring the style identifications, and taking the style identifications as the vertex identifications corresponding to the rendering style information.
8. The image rendering method according to claim 7, wherein the mesh to be rendered is divided into a plurality of sub-meshes, each sub-mesh is associated with different rendering style information, the obtaining vertex identifications corresponding to the associated rendering style information according to the second mesh identifications includes:
acquiring a sub-grid identification for marking the sub-grid;
and splicing the second grid identification and the sub-grid identification to obtain vertex identifications corresponding to the associated rendering style information.
9. An image rendering method, comprising:
obtaining a target grid and a plurality of pieces of rendering style information, wherein the target grid is obtained by merging the grids to be rendered associated with each piece of rendering style information, the vertex attributes of the grids to be rendered and the rendering style information both comprise vertex identifications, and the target grid is configured as the grid commonly referenced by the rendering style information;
and screening out target pixel fragments corresponding to the rendering style information according to the vertex identifications, and rendering, wherein the target pixel fragments are obtained by converting target vertices in the target grid, and the target vertices and the rendering style information have the same vertex identifications.
10. The image rendering method according to claim 9, wherein the screening out the target pixel segments corresponding to the rendering style information according to the vertex identifications to render includes:
rasterizing each vertex in the target grid to obtain a plurality of candidate pixel fragments, wherein fragment attributes of the candidate pixel fragments comprise fragment identifications, and the fragment identifications are interpolated by the vertex identifications of the corresponding vertices;
And carrying out consistency matching on the segment identification and the vertex identification in the current rendering style information, determining the candidate pixel segment as a target pixel segment when the matching result is consistent, and rendering the target pixel segment.
11. The image rendering method of claim 10, wherein the consistency matching of the segment identification with the vertex identification in the current rendering style information comprises:
creating a global variable in a rendering pipeline, acquiring the vertex identification from the current rendering style information, and assigning the global variable as the vertex identification;
traversing a plurality of candidate pixel fragments, and carrying out consistency matching on the fragment identification of the current candidate pixel fragment and the global variable.
12. The image rendering method according to claim 10, characterized in that the image rendering method further comprises:
when the matching result is inconsistent, calling a discarding function in a fragment shader, and cutting the candidate pixel fragments based on the discarding function;
or when the matching result is inconsistent, setting the transparency of the candidate pixel fragments to a preset value.
13. An image rendering apparatus, comprising:
an information acquisition module: configured to acquire a plurality of grids to be rendered and rendering style information associated with each grid to be rendered;
a vertex identification acquisition module: configured to generate vertex identifications corresponding to the rendering style information, add the vertex identifications to the rendering style information, and add the vertex identifications to the vertex attributes of the associated grids to be rendered;
a grid merging module: configured to merge the plurality of grids to be rendered to obtain at least one target grid, and to configure the target grid as the grid commonly referenced by the corresponding rendering style information;
a rendering information sending module: configured to send the target grid and each piece of rendering style information to a graphics processor, for the graphics processor to screen out target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications for rendering, wherein the target pixel fragments are obtained based on target vertex conversion in the target grid, and the target vertices and the rendering style information have the same vertex identification.
14. An image rendering apparatus, comprising:
a rendering information receiving module: configured to acquire a target grid and a plurality of pieces of rendering style information, wherein the target grid is obtained by merging the grids to be rendered associated with each piece of rendering style information, the vertex attributes of the grids to be rendered and the rendering style information both comprise vertex identifications, and the target grid is configured as the grid commonly referenced by the rendering style information;
a rendering module: configured to screen out target pixel fragments corresponding to each piece of rendering style information according to the vertex identifications for rendering, wherein the target pixel fragments are obtained by converting target vertices in the target grid, and the target vertices and the rendering style information have the same vertex identification.
15. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the image rendering method of any one of claims 1 to 12 when executing the computer program.
16. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the image rendering method of any one of claims 1 to 12.
CN202311200361.6A 2023-09-18 2023-09-18 Image rendering method and device, electronic equipment and storage medium Active CN117011492B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311200361.6A CN117011492B (en) 2023-09-18 2023-09-18 Image rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311200361.6A CN117011492B (en) 2023-09-18 2023-09-18 Image rendering method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117011492A true CN117011492A (en) 2023-11-07
CN117011492B CN117011492B (en) 2024-01-05

Family

ID=88567446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311200361.6A Active CN117011492B (en) 2023-09-18 2023-09-18 Image rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117011492B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111508052A (en) * 2020-04-23 2020-08-07 网易(杭州)网络有限公司 Rendering method and device of three-dimensional grid body
CN112381918A (en) * 2020-12-03 2021-02-19 腾讯科技(深圳)有限公司 Image rendering method and device, computer equipment and storage medium
WO2022116759A1 (en) * 2020-12-03 2022-06-09 腾讯科技(深圳)有限公司 Image rendering method and apparatus, and computer device and storage medium
CN113947657A (en) * 2021-10-18 2022-01-18 网易(杭州)网络有限公司 Target model rendering method, device, equipment and storage medium
CN116012507A (en) * 2022-12-23 2023-04-25 星臻科技(上海)有限公司 Rendering data processing method and device, electronic equipment and storage medium
CN116245999A (en) * 2023-05-09 2023-06-09 小米汽车科技有限公司 Text rendering method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN117011492B (en) 2024-01-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code: HK; legal event code: DE; ref document number: 40097762