CN117523072A - Rendering processing method, apparatus, device and computer readable storage medium - Google Patents

Rendering processing method, apparatus, device and computer readable storage medium

Info

Publication number
CN117523072A
CN117523072A (application CN202311433604.0A)
Authority
CN
China
Prior art keywords: target, primitive, fragment, vertex, strip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311433604.0A
Other languages
Chinese (zh)
Inventor
薛程
田宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202311433604.0A priority Critical patent/CN117523072A/en
Publication of CN117523072A publication Critical patent/CN117523072A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The application discloses a rendering processing method, apparatus, device and computer readable storage medium. The embodiments of the application can be applied to scenes such as cloud technology, intelligent traffic, assisted driving and maps. Vertex data of each vertex on a strip-shaped element in the target scene can be obtained and geometrically processed so as to convert each vertex on the strip-shaped element into screen space; in screen space, target fragments that intersect the target primitives to which the vertices belong are determined; the weight of each target fragment is determined according to the overlapping proportion between the target fragment and the target primitive to which it belongs, and the target primitives belonging to the strip-shaped element are colored according to the weight of each target fragment to obtain a first color rendering map; the first color rendering map is then fused with the color rendering map of the other elements in the target scene according to the weight of each target fragment, so as to obtain a target image corresponding to the target scene. In this way, the line elements in the finally rendered image can be displayed normally.

Description

Rendering processing method, apparatus, device and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a rendering processing method, apparatus, device, and computer readable storage medium.
Background
A high-quality electronic map provides accurate service for people's travel and driving activities. A digital twin map is such a high-quality map: it is rendered mainly according to collected road information (such as road boundaries and lane lines) so as to reproduce the real scene as closely as possible and provide better map service quality. However, lane lines in regions far from the viewpoint may appear broken after rendering; because this does not affect the use of the map, the broken parts of the lane lines have not received much attention.
In the related art, when the lane lines of a far region are rendered, the display width of the lane lines is widened to ensure that the lane lines of the far region are displayed without interruption in the map.
In the research and practice of the related art, the inventors of the present application found that widening the display width of the lane lines in far regions causes an abnormal display effect in which the lane lines in far regions are wider than those in near regions, and that if the widening is insufficient, the lane lines in far regions still appear broken, which affects the quality of the map.
Disclosure of Invention
The embodiments of the application provide a rendering processing method, apparatus, device and computer readable storage medium, which can ensure that line elements in a rendered image are displayed normally and improve image quality.
The embodiment of the application provides a rendering processing method, which comprises the following steps:
performing geometric processing on vertex data of vertices on a strip-shaped element in a target scene to obtain screen coordinates of each vertex on the strip-shaped element;
performing fragment matching in screen space according to the screen coordinates of each vertex on the strip-shaped element, and determining fragment information of each target fragment representing the strip-shaped element, wherein the fragment information indicates the target primitive to which the target fragment belongs, and the target primitive is constructed from the vertices on the strip-shaped element; a target fragment is a pixel that intersects the target primitive in screen space;
determining the weight of each target fragment according to the overlapping proportion between the target fragment and the target primitive to which it belongs;
performing fragment coloring according to the fragment information and the weight of each target fragment to obtain a first color rendering map;
fusing the first color rendering map and a second color rendering map according to the weight corresponding to each target fragment to obtain a target image presenting the target scene; the second color rendering map is rendered from the vertex data of the vertices on the elements other than the strip-shaped element in the target scene.
Accordingly, an embodiment of the present application provides a rendering processing device, including:
the processing unit is used for performing geometric processing on the vertex data of the vertices on the strip-shaped element in the target scene to obtain the screen coordinates of each vertex on the strip-shaped element;
the matching unit is used for performing fragment matching in screen space according to the screen coordinates of each vertex on the strip-shaped element, and determining the fragment information of each target fragment representing the strip-shaped element, wherein the fragment information indicates the target primitive to which the target fragment belongs, and the target primitive is constructed from the vertices on the strip-shaped element; a target fragment is a pixel that intersects the target primitive in screen space;
the determining unit is used for determining the weight of each target fragment according to the overlapping proportion between the target fragment and the target primitive to which it belongs;
the coloring unit is used for performing fragment coloring according to the fragment information and the weight of each target fragment to obtain a first color rendering map;
the fusion unit is used for fusing the first color rendering map and the second color rendering map according to the weight corresponding to each target fragment to obtain a target image presenting the target scene; the second color rendering map is rendered from the vertex data of the vertices on the elements other than the strip-shaped element in the target scene.
In some embodiments, the determining unit is further configured to:
dividing each target fragment into an area array arranged in rows and columns, and determining the total number of fragment sub-regions in the area array;
determining the fragment sub-regions in the area array that intersect the target primitive;
determining the overlapping proportion between each target fragment and the target primitive according to the number of intersecting fragment sub-regions in each area array and the total number;
and determining the overlapping proportion between each target fragment and the target primitive as the weight of the corresponding target fragment.
In some embodiments, the fusion unit is further configured to:
acquiring, for each target pixel position in the first color rendering map, a first color value corresponding to the target pixel position;
acquiring a second color value of the target pixel position from the second color rendering map;
for each target pixel position, determining a first weighting coefficient for the first color value and a second weighting coefficient for the second color value according to the weight of the target fragment at the target pixel position;
for each target pixel position, weighting the first color value and the second color value of the target pixel position according to the first weighting coefficient and the second weighting coefficient to obtain a fused color value of the target pixel position;
and obtaining a target image presenting the target scene according to the fused color values of the target pixel positions.
In some embodiments, the fusion unit is further configured to:
for each target pixel position, taking the weight of the target fragment at the target pixel position as the first weighting coefficient of the first color value corresponding to the target pixel position;
and determining the second weighting coefficient of the second color value corresponding to the target pixel position according to the weight of the target fragment at the target pixel position, wherein, for the same target pixel position, the sum of the first weighting coefficient of the first color value and the second weighting coefficient of the second color value is 1.
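By way of illustration only, the weighted fusion described above can be sketched as follows. This is a minimal Python sketch in which the array names and the use of NumPy are assumptions of the sketch rather than part of the application; it simply uses the fragment weight as the blend factor between the two rendering maps.

```python
import numpy as np

def fuse_color_maps(first_map, second_map, weights):
    """Blend the strip-element rendering map into the rendering map of the other elements.

    first_map, second_map: float arrays of shape (H, W, 3) holding RGB color values.
    weights: float array of shape (H, W); the weight of the target fragment at each
             target pixel position (0 where no strip-element fragment exists).
    The first weighting coefficient is the fragment weight and the second is
    1 - weight, so the two coefficients for the same pixel position sum to 1.
    """
    w = weights[..., None]                        # broadcast the weight over the RGB channels
    return w * first_map + (1.0 - w) * second_map
```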
In some embodiments, the matching unit is further configured to:
determining, in screen space, the target pixels that intersect the target primitive to which each vertex belongs, according to the screen coordinates of each vertex on the strip-shaped element;
taking each target pixel as a target fragment, and determining the fragment information of each target fragment according to the position information of the target pixel and the vertex information of each vertex of the target primitive intersecting the target pixel; the vertex information is obtained by performing geometric processing on the vertex data of the vertex.
In some embodiments, the rendering processing apparatus further includes a detection unit configured to:
obtaining a target depth map, wherein the target depth map is obtained by performing texture rendering according to vertex data of vertexes on other elements except the strip-shaped elements in the target scene;
performing depth detection on each target fragment according to the target depth map to obtain a depth detection result of each target fragment;
determining the target fragments whose depth detection result is a pass as fragments to be processed;
the determining unit is further configured to:
and determining the weight of each fragment to be processed according to the overlapping proportion between the fragment to be processed and the target primitive to which it belongs.
In some embodiments, the detection unit is further configured to:
obtaining, from the target depth map, the depth value of the pixel position where each target fragment is located, as the depth comparison value of the target fragment;
comparing the depth comparison value of each target fragment with the target depth value of the target fragment;
if the target depth value of a target fragment is greater than the corresponding depth comparison value, determining that the depth detection result of the target fragment is a pass;
and if the target depth value of a target fragment is less than the corresponding depth comparison value, determining that the depth detection result of the target fragment is a fail.
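The comparison described in the preceding paragraphs can be sketched as below. This is a minimal, assumption-laden Python illustration of the rule stated in the application that a larger target depth value means the fragment passes detection; the function and argument names are purely illustrative.

```python
def depth_test(target_depth_value, depth_map, x, y):
    """Return True if the target fragment at pixel (x, y) passes depth detection.

    target_depth_value: depth value of the target fragment of the strip-shaped element.
    depth_map: 2D array of depth values rendered from the other scene elements
               (the target depth map), indexed as depth_map[y][x].
    """
    depth_comparison_value = depth_map[y][x]             # depth comparison value
    return target_depth_value > depth_comparison_value   # greater value -> detection passes
```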
In some embodiments, the processing unit is further configured to:
constructing a plurality of target primitives forming the strip-shaped element according to the vertex data of the vertices on the strip-shaped element in the target scene, each target primitive consisting of a plurality of vertices;
converting the world coordinates of each vertex associated with each target primitive into clipping coordinates;
and converting the clipping coordinates of each vertex in the vertex combination corresponding to each target primitive into screen space to obtain the screen coordinates of each vertex on the strip-shaped element.
In some embodiments, the coloring unit is further configured to:
determining the first color value of each target fragment according to the fragment information of the target fragment;
determining the transparency of each target fragment according to the weight of the target fragment;
and obtaining the first color rendering map according to the first color value and the transparency of each target fragment.
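A possible reading of this coloring step is sketched below; it assumes, for illustration only, that each target fragment carries an interpolated RGB color derived from its fragment information and its overlap weight, and it writes the weight into the alpha channel as the fragment's transparency.

```python
def color_fragments(fragments, width, height):
    """Produce the first color rendering map (RGBA) from the strip-element fragments.

    fragments: iterable of dicts with keys 'x', 'y', 'color' (an (r, g, b) tuple
               derived from the fragment information) and 'weight' (overlap ratio).
    Pixels not covered by any target fragment stay fully transparent.
    """
    image = [[(0.0, 0.0, 0.0, 0.0) for _ in range(width)] for _ in range(height)]
    for frag in fragments:
        r, g, b = frag['color']
        alpha = frag['weight']                 # the weight is used as the fragment transparency
        image[frag['y']][frag['x']] = (r, g, b, alpha)
    return image
```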
In some embodiments, the rendering processing apparatus further comprises a rendering unit for:
acquiring scene data of the target scene, wherein the scene data includes the vertex data of the vertices on all elements in the target scene;
applying a material mark to the vertex data of the vertices on the strip-shaped element in the scene data;
and rendering according to the vertex data in the scene data that carries no material mark, so as to obtain the second color rendering map.
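The material marking can be pictured as a simple tag-and-filter pass over the scene's vertex data. The sketch below is illustrative only (the field names and the STRIP_MATERIAL tag are invented for the sketch) and shows how strip-element vertices would be excluded when rendering the second color rendering map.

```python
STRIP_MATERIAL = "strip_element"   # illustrative material tag, not named in the application

def mark_strip_vertices(scene_vertices, strip_element_ids):
    """Apply the material mark to vertex data belonging to strip-shaped elements."""
    for vertex in scene_vertices:
        if vertex["element_id"] in strip_element_ids:
            vertex["material"] = STRIP_MATERIAL
    return scene_vertices

def vertices_for_second_pass(scene_vertices):
    """Keep only vertex data without the material mark (all non-strip elements)."""
    return [v for v in scene_vertices if v.get("material") != STRIP_MATERIAL]
```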
In addition, the embodiment of the application also provides a computer device, which comprises a processor and a memory, wherein the memory stores a computer program, and the processor is used for running the computer program in the memory to realize the steps in any of the rendering processing methods provided by the embodiment of the application.
In addition, the embodiment of the application further provides a computer readable storage medium, which stores a plurality of instructions adapted to be loaded by a processor to perform the steps in any of the rendering processing methods provided in the embodiment of the application.
In addition, embodiments of the present application also provide a computer program product comprising computer instructions that, when executed, implement steps in any of the rendering processing methods provided in the embodiments of the present application.
When target scene data to be rendered is acquired, first, for a strip-shaped element (such as a line element) that is prone to appearing broken after rendering, the vertex data of each vertex on the strip-shaped element is acquired and geometrically processed, so that each vertex on the strip-shaped element is converted into screen space; then, fragment configuration is performed in screen space according to the target primitives to which the vertices belong, so as to determine the target fragments that intersect the primitives of the strip-shaped element; next, the weight of each target fragment is determined according to the overlapping proportion between the target fragment and the target primitive to which it belongs, and the target primitives belonging to the strip-shaped element are colored according to these weights, so that a first color rendering map is obtained and the strip-shaped element that is prone to a broken display effect in the target scene is rendered separately; finally, the first color rendering map is fused with the color rendering map of the other elements in the target scene according to the weight of each target fragment, so as to obtain a target image corresponding to the target scene. In this way, the strip-shaped elements in the scene that are prone to appearing broken can be separated from the other elements in the scene and rendered independently, and this independent rendering incorporates the weight of every fragment that intersects a primitive, which ensures that the strip-shaped elements are displayed normally; fusing the independently rendered image with the image of the other elements then ensures that the line elements in the finally rendered image are displayed normally, improving image quality.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of a scene of a rendering processing system provided in an embodiment of the present application;
FIG. 2 is an exemplary diagram of a road scene provided by an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating the display effect of a sub-pixel line according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of steps of a rendering processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the structural relationship between primitives and vertices in a lane line element according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of the area array of a target fragment according to an embodiment of the present application;
FIG. 7 is a schematic view of the display effect of a lane line in a road scene after rendering processing according to an embodiment of the present application;
FIG. 8 is another schematic flowchart of steps of the rendering processing method according to an embodiment of the present application;
FIG. 9 is a schematic scene flow diagram of a rendering processing method according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a rendering processing apparatus according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
In some of the processes described in the specification, claims and drawings above, a number of steps occurring in a particular order are included, but it should be understood that the steps may be performed out of their order of appearance or in parallel; the sequence numbers of the steps are merely used to distinguish the steps and do not themselves represent any order of execution. Furthermore, descriptions such as "first" and "second" herein are used to distinguish similar objects and do not necessarily describe a particular sequence or chronological order.
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Embodiments of the present application provide a rendering processing method, apparatus, device, and computer-readable storage medium. Specifically, the embodiments of the present application will be described from the dimension of a rendering processing apparatus, which may be specifically integrated in a computer device, where the computer device may be a server, or may be a device such as a user terminal. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligence platforms. The user terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, an intelligent sound box, a smart watch, an intelligent home appliance, a vehicle-mounted terminal, an intelligent voice interaction device, an aircraft, and the like.
It should be noted that, in the specific embodiments of the present application, related data (such as "map" related information in the following) of user information, user usage records, user conditions, and the like are referred to, and when the above embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and collection, use, and processing of related data need to comply with related laws and regulations and standards of related countries and regions.
It should be noted that the rendering processing method provided in the embodiments of the present application may be applied to the scenes described above, and these scenes may be implemented through cloud technology, or a combination of technologies, etc., as specifically described by the following examples:
Cloud technology refers to a hosting technology that unifies series of resources such as hardware, software and networks in a wide area network or a local area network, so as to realize the computation, storage, processing and sharing of data. Specifically, cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied in the cloud computing business model; it can form a resource pool and be used on demand, flexibly and conveniently. Cloud computing technology will become an important support. The background services of technical network systems require a large amount of computing and storage resources, for example video websites, picture websites and many portal websites. With the rapid development and application of the internet industry, every article may have its own identification mark in the future, which needs to be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong system backing support, which can only be realized through cloud computing.
Cloud computing is a computing model that distributes computing tasks over a resource pool formed by a large number of computers, enabling various application systems to obtain computing power, storage space and information services as needed. The network that provides the resources is referred to as the "cloud". From the user's point of view, the resources in the "cloud" can be expanded infinitely, and can be obtained at any time, used on demand, expanded at any time and paid for according to use. As a basic capability provider of cloud computing, a cloud computing resource pool (cloud platform for short, generally referred to as an IaaS (Infrastructure as a Service) platform) is established, and multiple types of virtual resources are deployed in the resource pool for external clients to select and use.
According to the logic function division, a PaaS (Platform as a Service ) layer can be deployed on an IaaS (Infrastructure as a Service ) layer, and a SaaS (Software as a Service, software as a service) layer can be deployed above the PaaS layer, or the SaaS can be directly deployed on the IaaS. PaaS is a platform on which software runs, such as a database, web container, etc. SaaS is a wide variety of business software such as web portals, sms mass senders, etc. Generally, saaS and PaaS are upper layers relative to IaaS.
The rendering processing service referred to in the embodiments of the present application may be implemented by cloud computing. The rendering service can be applied to scenes such as maps, intelligent traffic, assisted driving and the metaverse, as specifically described by the following embodiments:
for example, referring to fig. 1, a schematic view of a scene of a rendering processing system provided in an embodiment of the present application may include a server and/or a terminal; when the device in the system includes only the server or the terminal, the server or the terminal may directly execute the rendering processing method of the embodiment of the present application; when the system is a combination of the terminal and the server, the rendering processing method of the embodiment of the application can be executed by the mutual cooperation of the terminal and the server.
Specifically, when the devices in the system include only a server or a terminal, the server or the terminal can perform geometric processing on the vertex data of the vertices on the strip-shaped element in the target scene to obtain the screen coordinates of each vertex on the strip-shaped element; perform fragment matching in screen space according to the screen coordinates of each vertex on the strip-shaped element, and determine the fragment information of each target fragment representing the strip-shaped element, wherein the fragment information indicates the target primitive to which the target fragment belongs, and the target primitive is constructed from the vertices on the strip-shaped element; a target fragment is a pixel that intersects the target primitive in screen space; determine the weight of each target fragment according to the overlapping proportion between the target fragment and the target primitive to which it belongs; perform fragment coloring according to the fragment information and the weight of each target fragment to obtain a first color rendering map; and fuse the first color rendering map and the second color rendering map according to the weight corresponding to each target fragment to obtain a target image presenting the target scene; the second color rendering map is rendered from the vertex data of the vertices on the elements other than the strip-shaped element in the target scene.
As another example, taking a system composed of a terminal and a server, a communication connection is established between the terminal and the server. A target client is installed on the terminal, and the terminal can acquire target scene data through the target client and send it to the server, specifically the vertex data of each vertex on the various elements in the target scene. The server can perform geometric processing on the vertex data of the vertices on the strip-shaped element in the target scene to obtain the screen coordinates of each vertex on the strip-shaped element; perform fragment matching in screen space according to the screen coordinates of each vertex on the strip-shaped element, and determine the fragment information of each target fragment representing the strip-shaped element, wherein the fragment information indicates the target primitive to which the target fragment belongs, and the target primitive is constructed from the vertices on the strip-shaped element; a target fragment is a pixel that intersects the target primitive in screen space; determine the weight of each target fragment according to the overlapping proportion between the target fragment and the target primitive to which it belongs; perform fragment coloring according to the fragment information and the weight of each target fragment to obtain a first color rendering map; and fuse the first color rendering map and the second color rendering map according to the weight corresponding to each target fragment to obtain a target image presenting the target scene; the second color rendering map is rendered from the vertex data of the vertices on the elements other than the strip-shaped element in the target scene.
The rendering processing method of the embodiments of the application can be applied to the rendering of any scene image in the map field. Specifically, take the rendering of a scene image of a twin map as an example. The scene includes elements such as a road, environmental objects on both sides of the road (such as trees and traffic signs), and guidance markings drawn on the road (such as lane lines, lane boundaries and lane driving indication marks); all the elements in the scene can be captured by an image capture device to obtain scene data. With reference to fig. 2, which shows a road scene image, every element in the scene is rendered and drawn. It should be noted that line elements such as lane lines and lane boundaries in regions far from the image capture device (these line elements can be understood as the "strip-shaped elements" of the embodiments of the application) easily appear broken (not continuous) in the subsequently rendered image. This is mainly because the long, narrow line elements in the far regions appear tiny, so that a line element in a far region cannot cover an entire pixel during rendering; this case is called a "sub-pixel", and it causes some pixels to be left uncolored or not displayed, producing the broken, discontinuous appearance of the line elements. For example, the region marked by the dashed box in fig. 2 is far from the image capture device; the lane line in that region is originally a solid line, but its actual display effect is shown in fig. 3, where the lane line appears with missing pixels, as if it were a dashed line. It should be noted that the intervals at which the line breaks may be irregular, depending on the actual situation. In this regard, in this example, a separate rendering pass is performed for all the line elements in the scene: the other elements in the scene are rendered independently once, and all the line elements are combined and rendered independently once.
When all the line elements in the scene are rendered independently, first, geometric processing is performed on the vertex data of each vertex on the line elements; this geometric processing includes, but is not limited to, primitive construction and conversion of vertex coordinates into screen space, so as to determine the screen coordinates of each vertex of the line elements in screen space. After each vertex has been converted into screen space, the corresponding vertex information may also include the normal, the color value, the positions (coordinates) of the other vertices of the primitive to which the vertex belongs, and so on. Then, based on the screen coordinates of each vertex of the line elements, fragment configuration is performed in screen space for the target primitives forming the line elements, so as to find, for each target primitive, the target fragments it intersects (fully covers or partially overlaps); each target fragment directly affects how the line-element primitive is presented. Furthermore, in order to ensure that each target fragment can accurately represent the color of its pixel position, the overlapping proportion between each target fragment and the target primitive to which it belongs can be determined; this overlapping proportion can be understood as the ratio between the covered area of the target fragment and the total area of the target fragment. The overlapping proportion associated with each target fragment is taken as the weight of the fragment, and the weight can serve as the transparency of the fragment; the target fragments corresponding to the line elements can then be colored according to the weight of each target fragment, so as to obtain a first color rendering map for the line elements in the scene. Finally, according to the weight of each target fragment, the first color rendering map of the line elements in the scene is fused with the second color rendering map of the other elements, writing each color value of the line elements in the first color rendering map into the second color rendering map, so as to obtain the final target image of the target scene; the target image can be used as map data for the corresponding scene in the twin map.
The rendering processing method of the embodiments of the application can also be applied to the rendering of any scene image in the metaverse. Specifically, for any scene in the digital world of the metaverse, when the scene contains a linear element to be rendered, the linear element can be rendered independently: for example, the vertex data of the linear element is geometrically processed and converted into screen space, and the screen coordinates of each vertex of the linear element in screen space are determined, so that fragments are configured for each target primitive (which may be a triangle) forming the linear element according to the screen coordinates of the vertices, yielding the target fragments that intersect each target primitive. For each target fragment, the overlapping area between the fragment and the target primitive is determined, that is, how much of the fragment's area is covered by the target primitive, and the resulting overlapping proportion is taken as the weight of the target fragment; this weight can represent the transparency of the fragment. After the weight of each target fragment has been determined, fragment coloring can be performed according to the weight of each target fragment and the color values in the vertex information corresponding to the fragment, so that every target primitive forming the linear element is colored and a first color rendering map is obtained. In addition, this example also renders the other elements in the scene to obtain a second color rendering map, and for the color value at each pixel position in the first color rendering map (specifically, the pixel positions of the target fragments representing the linear element), blends that color value into the corresponding pixel position in the second color rendering map, obtaining the target image corresponding to the current scene in the metaverse.
It should be noted that the above is only an example, and may be applied to other rendering scenes, which is not described herein.
For ease of understanding, each step of the rendering processing method will be described in detail below. The order of description of the following examples is not intended to limit the preferred order of the examples.
In the embodiments of the present application, description will be made from the dimension of the rendering processing apparatus, and the rendering processing apparatus may be specifically integrated in a computer device, such as a terminal or a server. Referring to fig. 4, fig. 4 is a schematic step flow diagram of a rendering method according to an embodiment of the present application, where in the embodiment of the present application, a rendering device is specifically integrated on a server, and when a processor on the server executes a program instruction corresponding to the rendering method, the specific flow is as follows:
101. Perform geometric processing on the vertex data of the vertices on the strip-shaped element in the target scene to obtain the screen coordinates of each vertex on the strip-shaped element.
In the embodiments of the application, the rendering method can be applied to the rendering of images in scenes such as twin maps, the metaverse, assisted driving and intelligent traffic; specifically, it can be used to render strip-shaped elements in a scene (such as long and narrow lane lines on a road, or long and narrow strips in the metaverse) independently of the other elements in the scene, so as to ensure that the strip-shaped elements are rendered and displayed normally. To this end, in the embodiments of the application, the strip-shaped elements in the scene first need to be extracted separately for independent rendering.
The target scene may be a real scene or a virtual scene, for example, a real scene such as a road, a building, an indoor site, or a virtual scene such as a game world or a meta universe. It should be noted that, the target scene is a scene to be rendered, specifically, the target scene includes a plurality of elements to be rendered, and an element can be understood as an independent individual, such as a lane line, a road boundary, a plant, etc., and is rendered and drawn into an image according to all the elements in the target scene. For example, taking a road scene as an example, the road scene includes road elements, environment elements (such as plants and traffic signals) located at two sides of the road, boundary line elements and lane line elements located on the road, and when the road scene is rendered, the road, the environment elements, the boundary line elements, the lane line elements and other all elements in the road scene are mainly rendered, so as to generate an image corresponding to the road scene.
A primitive can be the smallest-granularity unit making up an element: an element can be composed of a plurality of primitives, and a vertex is the smallest-granularity unit making up a primitive. For example, a triangle may be a primitive serving as the smallest unit of any element; three vertices make up a triangle (primitive), and multiple triangle primitives make up an element, so each element in the target scene is made up of a series of vertices. Taking a lane line element in a road scene as an example, referring to fig. 5, which is a schematic structural diagram of a lane line element provided in an embodiment of the present application, three vertices form a triangle, the longest sides of two identical triangles coincide to form a lane line segment, and multiple lane line segments are spliced to form the lane line element in the target scene.
It should be noted that, each vertex on any element in the scene has corresponding vertex data, and each vertex data is not limited to information including a position (such as world coordinates), a normal line, a color, a two-dimensional texture coordinate (uv), and the like of the vertex, and may further include other information used for computing by the graphics processing unit (Graphic Processing Unit, GPU).
The geometric processing may be the conversion of vertex coordinates between different spaces, such as the conversion of the world coordinates of a vertex at the projection, clipping and screen-space levels. Specifically, during rendering, the world coordinates of a vertex are converted to clipping coordinates by a vertex shader, and the clipping coordinates of each vertex are converted to screen coordinates in screen space by a geometry shader. Taking a lane line element in a road scene as an example, after the scene data of the road scene is acquired, the data of each element is obtained, such as the lane line element data, which may include the vertex data corresponding to each vertex on the lane line element; each vertex on the lane line element has independent world coordinates. Then, in a first step, the world coordinates of the vertices are converted into clipping coordinates by a vertex shader (a shader that performs a series of operations on vertices), and in a second step, the clipping coordinates of each vertex in the group of vertices corresponding to each primitive are converted into screen space by a geometry shader, so that the screen coordinates of each vertex in screen space are obtained and the processing of the geometry stage is completed.
The screen coordinates may be the coordinates of a vertex on any element in the target scene in screen space. For example, any point in screen space, such as the lower left corner, the upper right corner or the centre of the screen space, may be taken as the origin of the screen coordinate system, and when the world coordinates of each vertex on any element in the target scene are converted into screen space they are expressed in this screen coordinate system.
In some embodiments, for the vertices on the strip-shaped element in the target scene, the target primitives may first be constructed, and each vertex on the strip-shaped element may then be converted into screen space through a series of vertex coordinate transformations in the geometric processing stage. For example, step 101 may include: constructing a plurality of target primitives forming the strip-shaped element according to the vertex data of the vertices on the strip-shaped element in the target scene, each target primitive consisting of a plurality of vertices; converting the world coordinates of each vertex associated with each target primitive into clipping coordinates; and converting the clipping coordinates of each vertex in the vertex combination corresponding to each target primitive into screen space to obtain the screen coordinates of each vertex on the strip-shaped element.
The target primitive may be a unit to be rendered composed of one or more vertices, for example, one vertex, one line, a triangle, and a polygon may be used as one primitive. In the embodiment of the present application, the target primitive may be a triangle composed of three vertices, which is the minimum unit of rendering processing of the embodiment of the present application. The plurality of target primitives form a strip element.
Specifically, after the vertex data of the vertices on the strip-shaped element in the target scene are obtained, the world coordinates of the corresponding vertices can be determined from the vertex data, and any three vertices that are adjacent or close in position are determined as a vertex combination according to the world coordinates of each vertex on the strip-shaped element.
Further, in the geometric processing, each vertex may be subjected to a series of operations by the vertex shader; for example, the vertex shader may perform operations including transformation and deformation on each vertex in order to modify, create or ignore attributes of the vertex, the attributes including but not limited to color, normal, texture coordinates and position. To meet the subsequent data requirements, the vertex shader needs to convert the vertices from model space into the unified clipping space, that is, convert the world coordinates of each vertex into clipping coordinates, which serve as the data for the next step.
Finally, in the geometric processing, the clipping coordinates of each vertex also need to be converted into screen coordinates, which is mainly done by the geometry shader. Specifically, the vertex data of the three vertices in the vertex combination corresponding to each target primitive is used as the input of the geometry shader, so that the geometry shader can obtain the target primitive information for each vertex. The target primitive information can be understood as triangle information, which may include the data of the three vertices of the corresponding vertex combination, for example attributes such as the clipping coordinates, positions and normals of the three vertices, without limitation here. Further, for the vertex combination constituting each target primitive, the geometry shader converts the three vertices of the combination into screen space, that is, it converts the clipping coordinates of each vertex in the combination into screen coordinates in screen space and loads the screen coordinates into the vertex data of each vertex. Thus, for each vertex in screen space, the position of the vertex in screen space can be determined from its vertex data, and the target primitive to which each vertex belongs, together with the other vertices that jointly constitute that target primitive, can also be determined from the vertex data of each vertex.
Through the method, the vertex data of each vertex on the strip-shaped element can be extracted independently for the strip-shaped element in the target scene, a plurality of target primitives forming the strip-shaped element are constructed, and the vertices corresponding to the target primitives are subjected to coordinate conversion to obtain the screen coordinates of each vertex in the screen space.
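For illustration, the coordinate conversion described above can be compressed into the following sketch; the single combined view-projection matrix and the particular viewport mapping are assumptions of the sketch, since the application only specifies that world coordinates are converted to clipping coordinates (vertex shader) and then to screen coordinates (geometry shader).

```python
import numpy as np

def world_to_screen(world_pos, view_proj, screen_w, screen_h):
    """Convert one vertex from world coordinates to screen coordinates.

    world_pos: (x, y, z) world coordinates of the vertex.
    view_proj: 4x4 combined view-projection matrix (an assumption of this sketch).
    """
    clip = view_proj @ np.append(np.asarray(world_pos, dtype=float), 1.0)  # clipping coordinates
    ndc = clip[:3] / clip[3]                        # perspective divide -> normalized device coords
    screen_x = (ndc[0] * 0.5 + 0.5) * screen_w      # viewport transform into screen space
    screen_y = (ndc[1] * 0.5 + 0.5) * screen_h
    return screen_x, screen_y
```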
102. Perform fragment matching in screen space according to the screen coordinates of each vertex on the strip-shaped element, and determine the fragment information of each target fragment representing the strip-shaped element.
In the embodiments of the present application, after each vertex on the strip-shaped element has been converted into screen space, the target primitives to which the vertices belong may be rasterized in screen space so as to configure a plurality of target fragments having intersections with each target primitive. After this fragment configuration, each target fragment carries the vertex information of the target primitive that intersects it, which indicates that the target fragment is responsible for the subsequent coloring of the target primitive to which it belongs.
A target fragment is a pixel that intersects the target primitive in screen space; it can be understood as a candidate for the pixel at that position in screen space, that is, the final state of the pixel is ultimately determined by the target fragment.
The fragment information indicates a target primitive to which the target fragment belongs, and the target primitive is constructed according to the vertexes on the strip-shaped elements. Specifically, the primitive information of each target primitive includes vertex combination information of the target primitive to which the target primitive belongs, where the vertex combination information may specifically be vertex information of each vertex in the vertex combination, for example, taking a triangle as the target primitive, each target primitive includes vertex information of three vertices of the triangle to which the target primitive belongs, and the vertex information is not limited to including coordinates, positions, colors, normals, depth values, and the like.
In some embodiments, the screen coordinates of each vertex on the strip-shaped element may be used to determine the target pixels that intersect the target primitive to which each vertex belongs, and the fragment information of each target fragment may then be determined according to the position of the target pixel and the vertex information of the vertices constituting the intersected target primitive. For example, step 102 may include:
(102.1) determining, in screen space, the target pixels that intersect the target primitive to which each vertex belongs, according to the screen coordinates of each vertex on the strip-shaped element;
(102.2) taking each target pixel as a target fragment, and determining the fragment information of each target fragment according to the position information of the target pixel and the vertex information of each vertex of the target primitive intersecting the target pixel.
The vertex information is obtained by performing geometric processing on the vertex data of the vertex, and includes, but is not limited to, screen coordinates, depth, normal, texture and the like.
Specifically, according to the screen coordinates of each vertex on the strip-shaped element, the target position of each vertex in the screen is determined, and for the target primitive formed by these vertices, the plurality of target pixels it intersects in screen space are determined; a target pixel for which an intersection is found can be understood as a target fragment before coloring, which completes the configuration of fragments in screen space for the target primitives to which the vertices of the strip-shaped element belong. Further, in order to ensure that each target fragment is accurately responsible for the coloring of the target primitive to which it belongs, the vertex information of that target primitive needs to be determined and recorded in each fragment. Specifically, the position information of each target pixel is taken as the position of the corresponding target fragment, and the target primitives intersected by each target fragment are determined based on the position of the fragment in screen space; for each target primitive intersected by a target fragment, the vertex information of the vertices forming the target primitive is taken as the fragment information of that target fragment and is written into or bound to the fragment, so that the target fragment carries the vertex information of the vertex combination of the target primitive.
In the above manner, the target primitives to which the vertices belong can be rasterized in screen space to configure a plurality of target fragments intersecting each target primitive, and the vertex information of the intersected target primitive is recorded in each target fragment, so that each target fragment carries the vertex information of the target primitive it intersects and is thereby designated to perform the subsequent coloring of that target primitive.
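One possible way to realize this fragment configuration is sketched below; the edge-function test and the sub-sampling of each candidate pixel are assumptions of the sketch, chosen so that a pixel is still configured as a target fragment even when its centre misses the thin primitive.

```python
import math

def point_in_triangle(px, py, tri):
    """Edge-function test: True if point (px, py) lies inside triangle tri."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    e0 = (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0)
    e1 = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
    e2 = (x0 - x2) * (py - y2) - (y0 - y2) * (px - x2)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

def pixels_intersecting(tri, samples=4):
    """Find the target pixels whose area intersects the target primitive tri.

    Every candidate pixel in the triangle's bounding box is sampled on a
    samples x samples grid; a single hit is enough to configure the pixel as a
    target fragment, even when the pixel centre misses the thin primitive.
    """
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    hits = []
    for y in range(math.floor(min(ys)), math.ceil(max(ys)) + 1):
        for x in range(math.floor(min(xs)), math.ceil(max(xs)) + 1):
            if any(point_in_triangle(x + (i + 0.5) / samples, y + (j + 0.5) / samples, tri)
                   for j in range(samples) for i in range(samples)):
                hits.append((x, y))   # fragment info (vertex info of tri) would be bound here
    return hits
```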
103. Determine the weight of each target fragment according to the overlapping proportion between the target fragment and the target primitive to which it belongs.
In the embodiments of the present application, after target fragments with intersections have been configured for the target primitives formed by the vertices of the strip-shaped element, in order to avoid problems such as the strip-shaped element being truncated or incompletely displayed after rendering, it must be ensured that every configured target fragment can color the target primitive to which it belongs. A factor affecting the coloring effect is the transparency of the target fragment, and this transparency can be determined by the overlapping proportion between the target fragment and the target primitive. Therefore, before each target fragment is colored, the embodiments of the application determine the overlapping proportion between each target fragment and the target primitive, so as to obtain the weight of each target fragment.
The overlapping proportion may be the area overlap ratio between each target fragment and the target primitive; specifically, it may be the ratio between the area of the target fragment covered by the target primitive and the total area of the target fragment. The overlapping proportion associated with each target fragment may be used as the weight value of the target fragment to represent its transparency.
In some embodiments, in order to improve the accuracy of the weight value of each target fragment, each target fragment may be divided into an area array containing a plurality of fragment sub-regions, so that the overlapping proportion of each target fragment is determined according to the number of fragment sub-regions in the area array that intersect the target primitive, thereby obtaining the weight of each target fragment. For example, step 103 may include:
(103.1) dividing each target fragment into an area array arranged in rows and columns, and determining the total number of fragment sub-regions in the area array;
(103.2) determining the fragment sub-regions in the area array that intersect the target primitive;
(103.3) determining the overlapping proportion between each target fragment and the target primitive according to the number of intersecting fragment sub-regions in each area array and the total number;
(103.4) determining the overlapping proportion between each target fragment and the target primitive as the weight of the corresponding target fragment.
The area array may be formed by arranging a plurality of fragment sub-regions; the numbers of fragment sub-regions in the rows and columns of the area array may be equal or unequal. For example, a target fragment may be divided evenly into an area array arranged in rows and columns according to a specification such as 2×2, 2×3, 3×3, 4×4 or 8×8, so as to obtain an area array of the corresponding specification. For example, referring to fig. 6, fig. 6 includes 9 target fragments, where the centre of each target fragment is its pixel centre; taking the middle target fragment as an example, it is divided according to the 4×4 specification to obtain a 4×4 area array, that is, the area array has 4 rows and each row contains 4 fragment sub-regions. The foregoing is merely exemplary and is not intended to limit the embodiments of the present application.
For example, as shown in fig. 6, each target fragment may be divided into a 4×4 area array containing 16 fragment sub-regions. Taking the target fragment in the middle of the figure as an example, the number of fragment sub-regions in its area array that intersect the target primitive is counted. In fig. 6 the middle target fragment belongs to two target primitives: the two target primitives (triangles) form a strip-shaped element segment, and this segment occupies the 5 fragment sub-regions at the lower right of the middle area array, so the overlapping proportion is 5/16 and the weight of the middle target fragment is 5/16. Although in fig. 6 the pixel centre does not intersect the target primitive to which the fragment belongs, the weight determined by the overlapping proportion between the fragment and the primitive indicates the fragment's subsequent transparency, so the fragment is still designated to color its target primitive in the following steps; this effectively avoids the sub-pixel phenomenon of the strip-shaped element and is reliable.
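Continuing the assumptions of the previous sketch (and reusing the point_in_triangle helper defined there), the following shows how the 4×4 area array of a target fragment could yield the fragment's weight; intersection of a fragment sub-region is approximated here by testing the sub-region's centre, which is only one possible reading of the application.

```python
def fragment_weight(pixel_x, pixel_y, tri, rows=4, cols=4):
    """Weight of one target fragment: fraction of its fragment sub-regions covered by tri.

    The pixel is divided into a rows x cols area array (4 x 4 here, as in the example
    of FIG. 6); each sub-region whose centre falls inside the target primitive counts
    as covered, and the weight is covered / total.
    """
    covered = 0
    for j in range(rows):
        for i in range(cols):
            sx = pixel_x + (i + 0.5) / cols            # sub-region centre, x
            sy = pixel_y + (j + 0.5) / rows            # sub-region centre, y
            if point_in_triangle(sx, sy, tri):         # helper from the previous sketch
                covered += 1
    return covered / (rows * cols)                     # e.g. 5 / 16 in the FIG. 6 example
```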
In some embodiments, since the display of elements in the scene image that have an overlapping (occlusion) relationship is mainly determined by the depth test result, the fragments used for coloring the target primitive can be screened out through the depth test, so as to ensure the accuracy of the strip-shaped elements when displayed in the scene image. For example, before step 103, the method may further include:
(103.A.1) obtaining a target depth map;
(103.A.2) performing depth detection on each target fragment according to the target depth map to obtain a depth detection result of each target fragment;
(103.A.3) determining the target fragments whose depth detection result is a pass as fragments to be processed;
step 103 may further include: determining the weight of each to-be-processed fragment according to the overlapping proportion between each to-be-processed fragment and the target primitive to which it belongs.
The target depth map may be obtained by performing texture rendering according to the vertex data of the vertices on the elements other than the strip-shaped elements in the target scene, and it stores the depth values of the pixel position points that represent those other elements in the screen space. Specifically, for the acquired scene data of the target scene, the vertex data of the vertices on the elements other than the strip-shaped elements may be collected from the scene data, while the collection of the vertex data of the vertices on the strip-shaped elements is skipped; after the collection is completed, rendering may be performed based on the collected vertex data of the vertices on the other elements to obtain a rendering texture of the target scene excluding the strip-shaped elements, namely the target depth map, from which the depth value of each pixel on the other elements can be determined.
The depth detection refers to comparing the depth of a target fragment with the depth value at the same pixel position point in the target depth map: if the depth of the target fragment is greater than the depth value of the other elements at that pixel position point in the target depth map, the target fragment passes the depth detection; otherwise, it fails the depth detection.
It should be noted that any element in the target scene is an element in three-dimensional space, and the rendering process converts the elements of the three-dimensional target scene into a two-dimensional image, generating an image that is stored in the frame buffer, where each position in the frame buffer stores a color value. When converting the elements of the three-dimensional target scene into the two-dimensional image, different elements may overlap at the same position point, so each position point of the two-dimensional image may be mapped multiple times; that is, one position point may be covered by multiple fragments, and for each position point, depth detection determines which fragment colors that position point. Therefore, this embodiment may perform depth detection on each target fragment based on the target depth map, so as to screen out the target fragments that pass the detection, i.e. the fragments to be processed, according to the depth detection result of each target fragment. Thereafter, for the to-be-processed fragments that pass the depth detection, the weight of each to-be-processed fragment may be determined according to the overlapping proportion between each to-be-processed fragment and the target primitive to which it belongs.
In the embodiment of the present application, when depth detection is performed on the target fragments according to the target depth map, each target fragment can generally pass the depth detection and serve as a fragment to be processed; another purpose of the depth detection is to write the depth value of the target fragment into the corresponding position point in the depth buffer. Specifically, a graphics processing unit (GPU) includes a frame buffer and a depth buffer corresponding to the frame buffer, where positions in the depth buffer correspond one by one to positions in the frame buffer; the color values of fragments are stored in the frame buffer and the depth values of fragments are stored in the depth buffer. Depth detection is performed on each target fragment associated with a target primitive of a strip-shaped element, and the depth values of the target fragments that pass the depth detection are written into the corresponding position points in the depth buffer, so as to avoid the strip-shaped element being occluded by other elements in the scene image.
In some embodiments, the target depth value of the target fragment may be compared with the depth value at the pixel position of the target fragment in the target depth map to determine the depth detection result. For example, step (103.A.2) may include: obtaining, from the target depth map, the depth value of the pixel position where each target fragment is located as the depth comparison value of that target fragment; comparing the depth comparison value of each target fragment with the target depth value of the target fragment; if the target depth value of the target fragment is greater than the corresponding depth comparison value, determining that the depth detection result of the target fragment is a pass; if the target depth value of the target fragment is smaller than the corresponding depth comparison value, determining that the depth detection result of the target fragment is a fail.
Specifically, for each target fragment that intersects a target primitive of the strip-shaped element, the depth value of the pixel position point matching the target fragment may be obtained from the target depth map to serve as the depth comparison value of the target fragment. Further, the target depth value of each target fragment is obtained: the target primitive to which each target fragment belongs may be determined based on the fragment information of the target fragment, the depth value of each vertex may be determined based on the vertex information of each vertex in the vertex combination of the target primitive, and the target depth value of the target fragment may then be determined from these vertex depth values; in general, for the strip-shaped elements in the embodiments of the present application, the depths of the vertices constituting each target primitive are consistent, so the depth value of any vertex of the target primitive to which a target fragment belongs may be used as the target depth value of that fragment. Finally, the target depth value of each target fragment is compared with the corresponding depth comparison value; when the target depth value of the target fragment is greater than the corresponding depth comparison value, the depth detection result of the target fragment is determined to be a pass, and otherwise it is determined to be a fail.
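A minimal sketch of this per-fragment depth detection is given below. The greater-than comparison follows the description above; the buffer layout, structure names, and the write of the passing fragment's depth into the depth buffer are assumptions introduced for illustration.

```cpp
#include <vector>
#include <cstddef>

struct Fragment {
    std::size_t pixelIndex;   // index of the pixel position point in screen space
    float depth;              // target depth value taken from a vertex of the owning primitive
    float weight;             // overlapping proportion with the owning target primitive
};

// Depth detection for the fragments of the strip-shaped element against the target depth map
// of the other scene elements. Fragments that pass are returned as fragments to be processed,
// and their depth is written into the depth buffer so other elements cannot occlude them later.
std::vector<Fragment> DepthDetect(const std::vector<Fragment>& targetFragments,
                                  const std::vector<float>& targetDepthMap,
                                  std::vector<float>& depthBuffer) {
    std::vector<Fragment> toProcess;
    for (const Fragment& frag : targetFragments) {
        float compareValue = targetDepthMap[frag.pixelIndex];   // depth comparison value
        if (frag.depth > compareValue) {                         // pass: strip element is in front
            depthBuffer[frag.pixelIndex] = frag.depth;           // write depth of passing fragment
            toProcess.push_back(frag);
        }
    }
    return toProcess;
}
```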
In the above manner, depth detection can be performed on each target fragment, the target fragments that pass the depth detection are divided into region arrays at a specific fine granularity so that each region array contains a plurality of fragment sub-regions, the weight of the target fragment corresponding to a region array is determined according to the number of its sub-regions that intersect the target primitive to which the fragment belongs, and this weight is used as the transparency of the strip-shaped element during coloring. This ensures that every target fragment of the strip-shaped element is subsequently colored normally, effectively avoids the strip-shaped element being truncated or incompletely displayed after rendering is completed, guarantees the quality of the strip-shaped element, and provides reliability.
104. And coloring the fragments according to the fragment information of each target fragment and the weight of each target fragment to obtain a first color rendering graph.
In the embodiment of the application, after the weight of each target fragment is determined, the strip-shaped elements in the target scene can be subjected to independent coloring processing so as to obtain a first color rendering diagram aiming at the strip-shaped elements in the target scene.
The first color rendering diagram at least includes the strip-shaped elements obtained by the rendering processing. Illustratively, taking a road scene as an example, the road scene includes road elements, environment elements located on both sides of the road (such as plants and traffic signal signs), and boundary line elements and lane line elements located on the road; the boundary line elements and the lane line elements may each be treated as a strip-shaped element. Therefore, after the weight of each target fragment intersecting the boundary line elements and the lane line elements is obtained, the target fragments related to the boundary line elements and the lane line elements in the target scene may be colored according to the weights of the target fragments to obtain the first color rendering diagram, where the first color rendering diagram includes the boundary line elements and the lane line elements obtained by the rendering processing.
In some embodiments, the fragment information of each target fragment may further include a color value of the target fragment in addition to the vertex information of the vertices that make up the target primitive to which it belongs, and the target fragments may be colored according to the color value and the weight of each target fragment. For example, step 104 may include: determining a first color value of each target fragment according to the fragment information of the target fragment; determining the transparency of each target fragment according to the weight of the target fragment; and obtaining the first color rendering diagram according to the first color value and the transparency of each target fragment.
Specifically, in the coloring stage of the strip-shaped element, the first color value of each target fragment can be determined according to the fragment information of the target fragment, and the weight of each target fragment is used as the transparency of the target fragment, so that in the coloring stage, fragment coloring is performed according to the first color value and the transparency of each target fragment. In the process of coloring the target fragments, coloring can be performed on each target fragment according to RGBA channels, RGB represents red, green and blue color channels, A (Alpha) represents transparency channels, the Alpha channels and the RGB channels are parallel, coloring of each target fragment is realized according to the color (RGB) channels and the transparency (A) channels based on the first color value and the transparency of each target fragment, so as to obtain a first color rendering diagram aiming at strip-shaped elements, and the strip-shaped elements are normally displayed in the first color rendering diagram. Thereafter, a weight value for each target primitive may also be stored for use in fusing the first color rendering map with the color rendering maps of other elements in the scene.
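The RGBA coloring described above can be sketched as follows: the RGB channels take the fragment's first color value and the alpha channel takes the fragment's weight, which acts as its transparency. The color layout, function name, and the render-target container are assumptions of this example.

```cpp
#include <vector>
#include <cstddef>

struct RGBA { float r, g, b, a; };

// Shade the strip-element fragments into the first color rendering map:
// RGB comes from the fragment's first color value and A (alpha) from the fragment's
// weight (overlapping proportion), which is reused later when fusing with the other map.
void ShadeStripFragments(const std::vector<std::size_t>& pixelIndices,
                         const std::vector<RGBA>& firstColorValues,   // per-fragment RGB (a ignored)
                         const std::vector<float>& weights,           // per-fragment weight in [0, 1]
                         std::vector<RGBA>& firstColorRenderMap) {    // initialised to {0,0,0,0}
    for (std::size_t i = 0; i < pixelIndices.size(); ++i) {
        RGBA& dst = firstColorRenderMap[pixelIndices[i]];
        dst.r = firstColorValues[i].r;
        dst.g = firstColorValues[i].g;
        dst.b = firstColorValues[i].b;
        dst.a = weights[i];   // weight stored as the transparency channel
    }
}
```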
In the above manner, the weight of each target fragment is used as the transparency of that fragment and, combined with its first color value, the corresponding position of the target primitive to which the fragment belongs is colored; when all the target fragments have been colored, the first color rendering diagram for the strip-shaped elements in the target scene is obtained, so that each target fragment intersecting the strip-shaped elements is colored normally in the first color rendering diagram, the quality of the strip-shaped elements is guaranteed, and reliability is provided.
105. And fusing the first color rendering graph and the second color rendering graph according to the weight corresponding to each target fragment to obtain a target image of the presented target scene.
In the embodiment of the present application, after the first color rendering diagram for the strip-shaped elements in the target scene is obtained, the second color rendering diagram for the other elements in the target scene needs to be obtained, so that the first color rendering diagram and the second color rendering diagram are fused according to the weight of each target fragment; the first color value of each target pixel position of the strip-shaped elements in the first color rendering diagram is thereby written into the second color rendering diagram to obtain a target image of all the elements of the target scene. The target image displays the strip-shaped elements in the target scene completely, so that even the strip-shaped elements in far areas do not suffer from the truncation problem, the quality of the strip-shaped elements during image rendering is guaranteed, and the visual effect of the image corresponding to the target scene is improved.
The second color rendering graph is rendered according to vertex data of vertices on other elements except the strip-shaped elements in the target scene.
The target image is an image containing all the elements in the target scene. For the strip-shaped elements in the target scene, color fusion with the other elements is mainly carried out according to the fragment weights at each position of the strip-shaped elements, so that whether a strip-shaped element is close to or far from the camera device, it is displayed completely in the target image and the visual effect of the image is improved.
In some embodiments, the first color value corresponding to each target pixel position may be obtained from the first color rendering diagram, the second color value corresponding to each target pixel position may be obtained from the second color rendering diagram of the other elements in the target scene, and the first color value and the second color value corresponding to each target pixel position may be fused according to the weight of the corresponding target fragment, so that the target image of the target scene can be obtained. For example, step 105 may include:
(105.1) acquiring a first color value corresponding to each target pixel position in the first color rendering diagram;
(105.2) obtaining a second color value for the target pixel location from the second color rendering map;
(105.3) for each target pixel location, determining a first weighting coefficient for the first color value and a second weighting coefficient for the second color value based on the weight of the target fragment to which the target pixel location belongs;
(105.4) weighting the first color value and the second color value of the target pixel location according to the first weighting coefficient and the second weighting coefficient for each target pixel location to obtain a fused color value of the target pixel location;
(105.5) obtaining a target image presenting the target scene according to the fusion color values of the target pixel positions.
The target pixel position may be a position of any pixel in the first color rendering map, or may be a position of any pixel representing a strip element, where the target pixel of the position has a corresponding color value. It will be appreciated that the color values between different pixel locations are independent of each other, i.e. the color values of different pixel locations may be different, but may also be the same, as the case may be.
The first color value may be a color value corresponding to each target pixel position in the first color rendering diagram, or may be understood as a pixel value, which includes values of three channels of red, green and blue, where the value range of each channel is [0, 255]. In addition, values for transparency channels may also be included, not limited herein. Similarly, the second color value may be a color value corresponding to each target pixel location in the second color rendering map.
The first weighting coefficient may be the weight of the target fragment corresponding to the target pixel position; specifically, the weight of the target fragment is directly used as the first weighting coefficient of the target pixel position, and the value range of the first weighting coefficient is [0,1]. It will be appreciated that the first weighting coefficients of different target pixel positions are independent of each other and reflect the transparency of the corresponding target fragment when rendered.
The second weighting coefficient can be understood as the weight of the second color value of the target pixel position in the second color rendering diagram during fusion; it defines the transparency of that second color value during fusion, and its value range is [0,1]. It should be noted that, since the first color rendering diagram and the second color rendering diagram are fused together, the second weighting coefficient needs to be determined in combination with the first weighting coefficient.
Specifically, the first color value of each target pixel position representing the strip-shaped element is obtained from the first color rendering diagram, and the second color value of each target pixel position is obtained from the second color rendering diagram. Further, the stored weight of each target fragment is used as the first weighting coefficient of the first color value at the corresponding target pixel position in the first color rendering diagram; since the first weighting coefficient affects the transparency of the first color value at that position, the matching of transparency between the first color value and the second color value needs to be considered during fusion, so the second weighting coefficient can be determined according to the first weighting coefficient. Further, for each target pixel position, the first color value is multiplied by the first weighting coefficient to obtain a first result, the second color value is multiplied by the second weighting coefficient to obtain a second result, and the first result and the second result are added to obtain the fused color value. Finally, a target image presenting each element of the target scene may be obtained based on the fused color values of the target pixel positions.
Illustratively, take the generation of a twin map image of a road scene as an example, where the road scene includes road elements, environment elements located on both sides of the road (such as plants, traffic signal signs, and street lamps), and boundary line elements and lane line elements located on the road. A second color rendering diagram is generated for the road elements and the environment elements on both sides of the road and denoted as map A, and a first color rendering diagram is generated for the boundary line elements and the lane line elements located on the road and denoted as map B. Then, according to the weight of each target fragment in the first color rendering diagram during coloring, the first weighting coefficient of the first color value at each target pixel position in the first color rendering diagram is determined; since the first weighting coefficient affects the transparency of the first color value at the corresponding pixel position, it is expressed as B_Alpha, and the second weighting coefficient of the second color value at the corresponding target pixel position is determined according to the first weighting coefficient and denoted as A_Alpha. Further, the first color value of each target pixel position is obtained from map B and defined as B_color, and the second color value of each target pixel position is obtained from map A and defined as A_color. Map A and map B are then color-fused; for each target pixel position, the fused color value Color is calculated as Color = A_color × A_Alpha + B_color × B_Alpha. Finally, each fused color value Color is written into the corresponding target pixel position in map A, so that a target image presenting each element of the road scene can be obtained. As shown in fig. 2, fig. 3 and fig. 7, for the long and narrow lane line elements and boundary line elements in the road scene shown in fig. 2, the lane line elements and boundary line elements in the farther areas are prone to the line-element truncation problem shown in fig. 3, which affects the visual effect. After the rendering processing in the embodiment of the present application, the lane line elements and boundary line elements in the road scene can guarantee display quality whether far from or near to the image capturing device. It should be noted that, as shown in fig. 7, when the lane line elements and boundary line elements are far from the image capturing device, part of the colors of their target fragments can be fused according to the actual weight values so that the corresponding colors are displayed, which effectively avoids the truncated display effect caused by complete non-display, improves the visual effect of the image, and provides reliability.
In some embodiments, since the first weighting coefficient affects the transparency of the corresponding first color value during fusion and the second weighting coefficient affects the transparency of the corresponding second color value during fusion, the first weighting coefficient and the second weighting coefficient of the same target pixel position are complementary, so that the fused color value of each target pixel position in the target image is characterized normally; the first weighting coefficient and the second weighting coefficient of each target pixel position therefore need to be calculated separately. For example, step (105.3) may include: for each target pixel position, taking the weight of the target fragment to which the target pixel position belongs as the first weighting coefficient of the first color value corresponding to the target pixel position; and determining the second weighting coefficient of the second color value corresponding to the target pixel position according to the weight of the target fragment to which the target pixel position belongs, where the sum of the first weighting coefficient of the first color value and the second weighting coefficient of the second color value corresponding to the same target pixel position is 1.
The weight of a target fragment representing the strip-shaped element is determined according to the overlapping proportion between the target fragment and the target primitive to which it belongs. In order to avoid the case where the pixel center of the target fragment does not intersect the target primitive and the target fragment therefore takes no effect, the embodiment of the present application uses the overlapping proportion between the target fragment and the target primitive as the coloring weight of the target fragment, uses this coloring weight as the first weighting coefficient when the target fragment is colored with its corresponding first color value, and treats the first weighting coefficient as the transparency during coloring, denoted B_Alpha. Further, the second weighting coefficient of the second color value at the same pixel position in the second color rendering diagram is defined as A_Alpha. When the first color value and the second color value at the same pixel position are fused, since the upper limit of the transparency value is 1, the second weighting coefficient is A_Alpha = 1 - B_Alpha. Then, for each target pixel position, the first color value and the second color value can be weighted and fused according to the first weighting coefficient and the second weighting coefficient, so that every part of the strip-shaped element is displayed as far as possible at the same pixel position and the strip-shaped element does not produce a truncation phenomenon.
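The fusion of map A and map B at each target pixel position can be sketched as below; the function and data names are hypothetical, and the blend follows the relation A_Alpha = 1 - B_Alpha described above.

```cpp
#include <vector>
#include <cstddef>

struct RGBA { float r, g, b, a; };

// Fuse the first color rendering map (map B, strip elements, alpha = fragment weight)
// into the second color rendering map (map A, other scene elements) at the given pixels:
// Color = A_color * (1 - B_alpha) + B_color * B_alpha, written back into map A.
void FuseColorMaps(const std::vector<std::size_t>& targetPixelPositions,
                   const std::vector<RGBA>& mapB,
                   std::vector<RGBA>& mapA) {
    for (std::size_t idx : targetPixelPositions) {
        const RGBA& b = mapB[idx];
        RGBA& a = mapA[idx];
        float bAlpha = b.a;            // first weighting coefficient (fragment weight)
        float aAlpha = 1.0f - bAlpha;  // second weighting coefficient, complementary
        a.r = a.r * aAlpha + b.r * bAlpha;
        a.g = a.g * aAlpha + b.g * bAlpha;
        a.b = a.b * aAlpha + b.b * bAlpha;
    }
}
```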
In this embodiment of the present application, a second color rendering map may be further rendered according to vertex data of vertices on elements other than the strip-shaped elements in the target scene, where the specific process is: acquiring scene data of a target scene, wherein the scene data comprises vertex data of vertexes on all elements in the target scene; marking materials of vertex data of vertexes on the strip-shaped elements in the scene data; and performing rendering processing according to vertex data which is not marked by the materials in the scene data to obtain a second color rendering chart.
It should be noted that, in order to render the stripe element and other elements in the target scene separately, material binding may be performed on vertex data of each vertex on the stripe element, so that corresponding material marks are bound on vertex data of each vertex on the stripe element, where the material may be understood as a configuration parameter when rendering each vertex of the stripe element, and the configuration parameter is not limited to configuration information including color values, rendering logic and the like. Further, when performing image rendering based on vertex data of vertices on each element in the target scene, on one hand, vertex data marked by materials can be selected to perform rendering processing of strip-shaped elements; on the other hand, vertex data which is not marked by materials can be selected from the target scene data to be rendered, so that the rendering of other elements in the target scene is realized, and a second color rendering diagram is obtained.
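As a simple illustration of separating the two render passes by material mark, the sketch below partitions the scene's vertex data; the flag name and the data layout are assumptions made for this example only.

```cpp
#include <vector>

struct VertexData {
    float position[3];
    float uv[2];
    bool stripMaterialMark;   // true when the vertex belongs to a strip-shaped element
};

// Split scene vertex data so strip-shaped elements can be rendered independently:
// unmarked vertices feed the second color rendering map (and the target depth map),
// marked vertices feed the separate strip-element pass that produces the first map.
void PartitionByMaterialMark(const std::vector<VertexData>& sceneVertices,
                             std::vector<VertexData>& stripVertices,
                             std::vector<VertexData>& otherVertices) {
    for (const VertexData& v : sceneVertices) {
        (v.stripMaterialMark ? stripVertices : otherVertices).push_back(v);
    }
}
```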
As can be seen from the above, when target scene data to be rendered is obtained, the embodiments of the present application may first obtain the vertex data of each vertex on the strip-shaped elements (such as line elements), for which intermittent display abnormalities easily occur after rendering, and perform geometric processing to convert each vertex on the strip-shaped elements into the screen space; then, fragment configuration is performed in the screen space according to the target primitives to which the vertices belong, so as to determine the target fragments that intersect the primitives of the strip-shaped elements; next, the weight of each target fragment is determined according to the overlapping proportion between each target fragment and the target primitive, so that the target primitives belonging to the strip-shaped elements are colored according to the weight of each target fragment to obtain the first color rendering diagram, which realizes independent rendering of the strip-shaped elements that easily produce an intermittent display effect in the target scene; finally, the first color rendering diagram is fused with the color rendering diagram of the other elements in the target scene according to the weight of each target fragment to obtain the target image corresponding to the target scene. In this way, the strip-shaped elements that easily produce intermittent display abnormalities in the scene can be separated from the other elements in the scene and rendered independently, where the independent rendering combines the weight of each intersecting fragment to ensure the normal display of the strip-shaped elements, and the independently rendered image is fused with the image of the other elements, so that the line elements in the finally rendered image can be displayed normally and the image quality is improved.
According to the method described in the above embodiments, examples are described in further detail below.
In the embodiment of the present application, the rendering processing method provided in the embodiments of the present application is further described below by taking a specific rendering processing example.
Fig. 8 is a flowchart illustrating further steps of the rendering processing method according to the embodiment of the present application. For ease of understanding, the embodiments of the present application are described in connection with fig. 8.
In the embodiment of the present application, the description is given from the viewpoint of a rendering processing apparatus, which may specifically be integrated in a computer device such as a terminal or a server. For example, when the processor on the computer device executes a program corresponding to the rendering processing method, the specific flow of the rendering processing method is as follows:
201. And obtaining scene data of the target scene, and performing material marking on the vertex data of the vertices on the strip-shaped elements in the scene data.
The target scene may be a real scene or a virtual scene, for example, a real scene such as a road, a building, an indoor site, or a virtual scene such as a game world or a meta universe.
The scene data comprises vertex data of vertices on elements in the target scene. For example, taking a road scene as an example, the road scene includes a road element, environment elements (such as plants, traffic signals, street lamps, etc.) located on two sides of the road, a boundary line element located on the road, and a lane line element, and the scene data may include the road element, the environment element, the boundary line element, and vertex data of each vertex on the lane line element.
According to the road scene example described above, the strip-shaped elements may be the boundary line elements and the lane line elements. Material binding is performed on the vertex data of each vertex on the strip-shaped elements, so that the vertex data of each vertex on the strip-shaped elements is bound with a corresponding material mark, where the material can be understood as the configuration parameters used when rendering each vertex of the strip-shaped elements, the configuration parameters being not limited to configuration information such as color values and rendering logic.
A vertex is the unit of smallest granularity that constitutes an element; an element can be composed of a plurality of primitives, and a vertex is likewise the unit of smallest granularity that constitutes a primitive. For example, a triangle may serve as a primitive, i.e. the smallest unit of any element: three vertices make up one triangle (primitive), and multiple triangle primitives make up an element, so each element in the target scene is made up of a series of vertices.
It should be noted that, each vertex on any element in the scene has corresponding vertex data, and each vertex data is not limited to information including a position (such as world coordinates), a normal line, a color, a two-dimensional texture coordinate (uv), and the like of the vertex, and may further include other information used for computing by the graphics processing unit (Graphic Processing Unit, GPU).
202. And rendering vertex data which are not marked by materials in the scene data to obtain a second color rendering graph, and generating a target depth graph of other elements except the strip-shaped elements according to the vertex data which are not marked by materials in the scene data.
The second color rendering diagram contains other elements except strip-shaped elements in the target scene, and the other elements refer to any other elements except strip-shaped elements in the target scene. For example, taking a road scene as an example, the road scene includes a road element, environmental elements (such as plants, traffic signals signs, street lamps, etc.) located at two sides of the road, boundary line elements located on the road, and lane line elements, vertex data which are not marked by materials includes the road element and vertex data of each vertex on the environmental element, and rendering processing can be performed according to the vertex data of each vertex on the road element and the environmental element, so as to obtain a second color rendering diagram.
The depth value of each position point of other elements in the target scene is included in the target depth map, which may reflect the texture of the depths of each point of the other elements in the scene, and may be a texture map. The target depth map stores depth values of pixel position points representing other elements of the target scene in a screen space. For example, taking a road scene as an example, the other elements mentioned above are road elements in the road scene and environment elements located at two sides of the road, and the target depth map may be generated according to vertex data of each vertex on the road elements and the environment elements.
203. And performing geometric processing on vertex data of vertices on the strip-shaped element marked by the material to obtain screen coordinates of each vertex on the strip-shaped element.
The geometric processing may be the conversion of vertex coordinates between different spaces, such as converting the world coordinates of vertices into projection space, clipping space, screen space, and so on. Specifically, during the rendering process, the world coordinates of the vertices are converted into clipping coordinates by the vertex shader, and the clipping coordinates of each vertex are converted into screen coordinates in the screen space by the geometry shader.
The screen coordinates may be the coordinates of a vertex on any element of the target scene in the screen space. For example, in the screen space, any corner of the screen space (such as the lower left corner or the upper right corner) or the screen space center point may be taken as the origin of the screen coordinate system, and the world coordinates of each vertex on any element of the target scene are represented in this screen coordinate system once they are converted into the screen space.
Specifically, the vertex data of the vertices on the strip-shaped elements in the target scene is screened out according to whether a material mark exists. The world coordinates of the corresponding vertices can be determined from the vertex data of the vertices on the strip-shaped elements; according to the world coordinates of each vertex of a strip-shaped element, any three adjacent or nearby vertices are determined as one vertex combination, other vertices at different adjacent positions can likewise be formed into new vertex combinations, and the three vertices in each vertex combination serve as the three vertices of one triangle, so that one triangle is formed and used as one target primitive. Further, in the geometric processing, each piece of vertex data may be subjected to a series of processes by the vertex shader; for example, the vertex shader may convert the world coordinates of each vertex into clipping coordinates. Finally, in the geometric processing, the (vertex data of the) three vertices in the vertex combination corresponding to each target primitive are used as the input of the geometry shader, so that the geometry shader can acquire the target primitive information, i.e. the triangle information, which may contain the data of the three vertices of the corresponding vertex combination; the geometry shader converts the three vertices of the vertex combination into the screen space, so that the clipping coordinates of each vertex in the vertex combination are converted into screen coordinates in the screen space, and the screen coordinates are loaded into the vertex data of each vertex. Therefore, for each vertex in the screen space, the position of the vertex in the screen space can be determined from its vertex data, and the target primitive to which each vertex belongs, as well as the other vertices that jointly form that target primitive, can also be determined from the vertex data.
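A minimal sketch of the coordinate conversion chain described above (world coordinates to clipping coordinates via a view-projection matrix, then to screen coordinates via the perspective divide and a viewport mapping) is given below; the matrix convention, the absence of a y-flip, and the viewport parameters are assumptions of this example.

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;
struct Vec4 { float x, y, z, w; };
struct Vec3 { float x, y, z; };

// Multiply a column vector by a 4x4 matrix (row-major storage).
static Vec4 Mul(const Mat4& m, const Vec4& v) {
    return {
        m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
        m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
        m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
        m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w,
    };
}

// World coordinates -> clipping coordinates (the vertex-shader step in the description).
Vec4 WorldToClip(const Mat4& viewProjection, const Vec3& world) {
    return Mul(viewProjection, {world.x, world.y, world.z, 1.0f});
}

// Clipping coordinates -> screen coordinates (the geometry-shader step): perspective divide
// to normalized device coordinates, then viewport mapping; z is kept as the depth value.
Vec3 ClipToScreen(const Vec4& clip, float screenWidth, float screenHeight) {
    float invW = 1.0f / clip.w;
    float ndcX = clip.x * invW, ndcY = clip.y * invW, ndcZ = clip.z * invW;
    return {(ndcX * 0.5f + 0.5f) * screenWidth,
            (ndcY * 0.5f + 0.5f) * screenHeight,
            ndcZ};
}
```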
204. And performing fragment matching in a screen space according to the screen coordinates of each vertex on the strip-shaped element, and determining fragment information of each target fragment representing the strip-shaped element.
In this embodiment of the present application, after each vertex on the strip-shaped element is converted into the screen space, the target primitive to which each vertex belongs may be rasterized in the screen space to configure a plurality of target fragments intersecting each target primitive. After the target fragments are configured, each target fragment may carry the vertex information of the target primitive that intersects it, so as to indicate that the target fragment is responsible for the subsequent coloring of the target primitive to which it belongs.
A target fragment refers to a pixel in the screen space that intersects the target primitive; it can be understood as a candidate for the pixel at a position in the screen space that intersects the target primitive, that is, the final state of that pixel is determined by the target fragment.
The fragment information indicates the target primitive to which the target fragment belongs, where the target primitive is constructed from the vertices on the strip-shaped elements. Specifically, the fragment information of each target fragment includes the vertex combination information of the target primitive to which the target fragment belongs, and the vertex combination information may specifically be the vertex information of each vertex in the vertex combination. For example, taking a triangle as the target primitive, each target fragment includes the vertex information of the three vertices of the triangle to which it belongs, and the vertex information is not limited to including coordinates, positions, colors, normals, depth values, and the like.
Specifically, the target position of each vertex in the screen is determined according to the screen coordinates of each vertex on each strip-shaped element, and according to the target primitives formed by the vertices, a plurality of target fragments that intersect those primitives exist in the screen space. Once the target fragments with intersections are determined, they can be understood as the target pixels before coloring, and the configuration of target fragments for the target primitives of the vertices on the strip-shaped element in the screen space is thereby completed. Further, in order to ensure that each target fragment is accurately responsible for the coloring work of the target primitive to which it belongs, the vertex information of the target primitive to which the target fragment belongs needs to be determined and recorded in each fragment. Specifically, the position information of each target fragment is taken as the position of the corresponding target pixel, and the target primitives intersected by each target fragment are determined based on the position of each target fragment in the screen space; for the target primitives intersected by each target fragment, the vertex information of the vertices forming the target primitive is taken as the fragment information of the target fragment, and the fragment information is written into or bound to each target fragment, so that the target fragment carries the vertex information of the vertex combination of its target primitive.
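The fragment configuration step can be illustrated as follows: every pixel whose square footprint has any overlap with the triangle (not only pixels whose center is covered) becomes a target fragment carrying a reference to the triangle's vertex data. The separating-axis overlap test and the per-primitive bounding-box loop are simplifications assumed for this sketch.

```cpp
#include <vector>
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

struct TargetPrimitive {          // triangle built from a vertex combination of the strip element
    Vec2 v0, v1, v2;              // screen-space positions of the vertex combination
};

struct TargetFragment {
    int px, py;                   // pixel position in screen space
    const TargetPrimitive* prim;  // fragment information: the primitive whose vertex data it carries
};

// Separating-axis test between the triangle and the unit pixel square [px,px+1) x [py,py+1):
// the shapes intersect unless some axis (a pixel axis or a triangle edge normal) separates them.
static bool PixelOverlapsTriangle(int px, int py, const TargetPrimitive& t) {
    const Vec2 tri[3] = {t.v0, t.v1, t.v2};
    const Vec2 box[4] = {{float(px), float(py)}, {float(px + 1), float(py)},
                         {float(px), float(py + 1)}, {float(px + 1), float(py + 1)}};
    auto separated = [&](Vec2 axis) {
        float tMin = 1e30f, tMax = -1e30f, bMin = 1e30f, bMax = -1e30f;
        for (const Vec2& p : tri) { float d = p.x * axis.x + p.y * axis.y; tMin = std::min(tMin, d); tMax = std::max(tMax, d); }
        for (const Vec2& p : box) { float d = p.x * axis.x + p.y * axis.y; bMin = std::min(bMin, d); bMax = std::max(bMax, d); }
        return tMax < bMin || bMax < tMin;
    };
    Vec2 axes[5] = {{1, 0}, {0, 1},
                    {-(t.v1.y - t.v0.y), t.v1.x - t.v0.x},
                    {-(t.v2.y - t.v1.y), t.v2.x - t.v1.x},
                    {-(t.v0.y - t.v2.y), t.v0.x - t.v2.x}};
    for (const Vec2& a : axes) if (separated(a)) return false;
    return true;
}

// Configure target fragments for one target primitive: every pixel in the primitive's bounding
// box that has any intersection with the triangle becomes a fragment bound to that primitive.
std::vector<TargetFragment> ConfigureFragments(const TargetPrimitive& prim) {
    int minX = int(std::floor(std::min({prim.v0.x, prim.v1.x, prim.v2.x})));
    int maxX = int(std::ceil (std::max({prim.v0.x, prim.v1.x, prim.v2.x})));
    int minY = int(std::floor(std::min({prim.v0.y, prim.v1.y, prim.v2.y})));
    int maxY = int(std::ceil (std::max({prim.v0.y, prim.v1.y, prim.v2.y})));
    std::vector<TargetFragment> fragments;
    for (int py = minY; py < maxY; ++py)
        for (int px = minX; px < maxX; ++px)
            if (PixelOverlapsTriangle(px, py, prim))
                fragments.push_back({px, py, &prim});
    return fragments;
}
```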
205. And performing depth detection on each target fragment according to the target depth map to obtain a depth detection result of each target fragment, and determining the target fragments whose depth detection result is a pass.
The depth detection refers to comparing the depth of a target fragment with the depth value of the same pixel position point in the target depth map to determine the fragment to be processed that colors that pixel position. It should be noted that any element in the target scene is an element in three-dimensional space, and the rendering process converts the elements of the three-dimensional target scene into a two-dimensional image, generating an image that is stored in the frame buffer, where each position in the frame buffer stores a color value. When converting the elements of the three-dimensional target scene into the two-dimensional image, different elements may overlap at the same position point, so each position point of the two-dimensional image may be mapped multiple times; that is, one position point may be covered by multiple fragments, and for each position point, depth detection determines which fragment colors that position point. Therefore, depth detection can be performed on each target fragment based on the target depth map, so that the target fragments that pass the detection, i.e. the fragments to be processed, can be screened out according to the depth detection result of each target fragment.
Specifically, for each target fragment that intersects a target primitive of the strip-shaped element, the depth value of the pixel position point matching the target fragment may be obtained from the target depth map to serve as the depth comparison value of the target fragment. Further, the target depth value of each target fragment is obtained: the target primitive to which each target fragment belongs may be determined based on the fragment information of the target fragment, the depth value of each vertex may be determined based on the vertex information of each vertex in the vertex combination of the target primitive, and the target depth value of the target fragment may then be determined from these vertex depth values; in general, for the strip-shaped elements in the embodiments of the present application, the depths of the vertices constituting each target primitive are consistent, so the depth value of any vertex of the target primitive to which a target fragment belongs may be used as the target depth value of that fragment. Finally, the target depth value of each target fragment is compared with the corresponding depth comparison value; when the target depth value of the target fragment is greater than the corresponding depth comparison value, the depth detection result of the target fragment is determined to be a pass, and otherwise it is determined to be a fail.
It should be noted that, when depth detection is performed on the target fragments according to the target depth map, each target fragment can generally pass the depth detection and serve as a fragment to be processed; another purpose of the depth detection is to write the depth value of the target fragment into the corresponding position point in the depth buffer. Specifically, a graphics processing unit (GPU) includes a frame buffer and a depth buffer corresponding to the frame buffer, where positions in the depth buffer correspond one by one to positions in the frame buffer; the color values of fragments are stored in the frame buffer and the depth values of fragments are stored in the depth buffer. Depth detection is performed on each target fragment associated with a target primitive of a strip-shaped element, and the depth values of the target fragments that pass the depth detection are written into the corresponding position points in the depth buffer, so as to avoid the strip-shaped element being occluded by other elements in the scene image.
206. And determining the weight of each target fragment according to the overlapping proportion between each target fragment that passes the depth detection and the target primitive to which the target fragment belongs.
The overlapping proportion may be the area overlap ratio between each target fragment and the target primitive to which it belongs; specifically, it may be the ratio between the area of overlap where the target fragment intersects the target primitive and the total area of the target fragment. It should be noted that the overlapping proportion associated with each target fragment may be used as the weight value of that target fragment and represents its transparency.
Specifically, in order to improve the accuracy of the weight value of each target fragment, for each target fragment that passes the depth detection, the target fragment may be divided into a region array including a plurality of fragment sub-regions, the total number of fragment sub-regions contained in the region array is counted, the number of target fragment sub-regions in the region array that intersect the target primitive is determined, the overlapping proportion between the corresponding target fragment and the target primitive is determined according to the ratio between the number of target fragment sub-regions and the total number of fragment sub-regions, and finally the weight of each target fragment is determined according to the overlapping proportion between each target fragment and the target primitive.
For example, as shown in fig. 6, each target fragment may be divided into a 4×4 region array, so that the region array includes 16 fragment sub-regions. Taking the target fragment in the middle of the figure as an example, the number of fragment sub-regions intersecting the target primitive in its region array is counted. In fig. 6, the middle target fragment belongs to two target primitives, where the two target primitives (triangles) form a strip-shaped element segment that covers 5 fragment sub-regions on the lower right side of the middle region array; the overlapping proportion is therefore 5/16, and the weight of the middle target fragment is 5/16.
207. And coloring the fragments according to the fragment information of each target fragment and the weight of each target fragment to obtain a first color rendering graph.
In the embodiment of the present application, after the weight of each target fragment is determined, each target fragment may be colored according to its color value and weight, so that the strip-shaped elements in the target scene are colored independently to obtain a first color rendering diagram for the strip-shaped elements in the target scene.
The first color rendering diagram at least includes the strip-shaped elements obtained by the rendering processing. Illustratively, taking a road scene as an example, the road scene includes road elements, environment elements located on both sides of the road (such as plants and traffic signal signs), and boundary line elements and lane line elements located on the road; the boundary line elements and the lane line elements may each be treated as a strip-shaped element. Therefore, after the weight of each target fragment intersecting the boundary line elements and the lane line elements is obtained, the boundary line elements and the lane line elements in the target scene may be colored according to the weights of the target fragments to obtain the first color rendering diagram, where the first color rendering diagram includes the boundary line elements and the lane line elements obtained by the rendering processing.
Specifically, in the coloring stage of the strip-shaped element, the first color value of each target fragment can be determined according to the fragment information of the target fragment, and the weight of each target fragment is used as the transparency of the target fragment, so that in the coloring stage, fragment coloring is performed according to the first color value and the transparency of each target fragment. In the process of coloring the target fragments, coloring can be performed on each target fragment according to RGBA channels, RGB represents red, green and blue color channels, A (Alpha) represents transparency channels, the Alpha channels and the RGB channels are parallel, coloring of each target fragment is realized according to the color (RGB) channels and the transparency (A) channels based on the first color value and the transparency of each target fragment, so as to obtain a first color rendering diagram aiming at strip-shaped elements, and the strip-shaped elements are normally displayed in the first color rendering diagram. Thereafter, a weight value for each target primitive may also be stored for use in fusing the first color rendering map with the color rendering maps of other elements in the scene.
208. And fusing the first color rendering graph and the second color rendering graph according to the weight corresponding to each target fragment to obtain a target image of the presented target scene.
In the embodiment of the present application, after the first color rendering diagram for the strip-shaped elements in the target scene is obtained, the second color rendering diagram for the other elements in the target scene needs to be obtained, so that the first color rendering diagram and the second color rendering diagram are fused according to the weight of each target fragment. Specifically, the first color value corresponding to each target pixel position is obtained from the first color rendering diagram, the second color value of each target pixel position is obtained from the second color rendering diagram of the other elements in the target scene, and the first color value and the second color value corresponding to each target pixel position are fused according to the weight of the corresponding target fragment, so that the first color value of each target pixel position representing the strip-shaped elements in the first color rendering diagram is written into the second color rendering diagram to obtain a target image of all the elements of the target scene. The target image displays the strip-shaped elements in the target scene completely, so that even the strip-shaped elements in far areas do not suffer from the truncation problem, the quality of the strip-shaped elements during image rendering is guaranteed, and the visual effect of the image corresponding to the target scene is improved.
Taking a twin map image of a road scene as an example, the road scene contains road elements, environment elements located on both sides of the road (such as plants, traffic signal signs, and street lamps), and boundary line elements and lane line elements located on the road. A second color rendering diagram is generated for the road elements and the environment elements on both sides of the road and set as map A, and a first color rendering diagram is generated for the boundary line elements and the lane line elements located on the road and set as map B. Then, according to the weight of each target fragment in the first color rendering diagram during coloring, the first weighting coefficient of the first color value at each target pixel position in the first color rendering diagram is determined; since the first weighting coefficient affects the transparency of the first color value at the corresponding pixel position, it is expressed as B_Alpha, and the second weighting coefficient of the second color value at the corresponding target pixel position is determined according to the first weighting coefficient and denoted as A_Alpha. Further, the first color value of each target pixel position is obtained from map B and defined as B_color, and the second color value of each target pixel position is obtained from map A and defined as A_color. Map A and map B are then color-fused; for each target pixel position, the fused color value Color is calculated as Color = A_color × A_Alpha + B_color × B_Alpha. Finally, each fused color value Color is written into the corresponding target pixel position in map A, so that a target image presenting each element of the road scene can be obtained. As shown in fig. 2, fig. 3 and fig. 7, the long and narrow lane line elements and boundary line elements in the road scene shown in fig. 2 are prone, in the farther areas, to the line-element truncation problem shown in fig. 3, which affects the visual effect. After the rendering processing in the embodiment of the present application, the lane line elements and boundary line elements in the road scene can guarantee display quality whether far from or near to the image capturing device. It should be noted that, as shown in fig. 7, when the lane line elements and boundary line elements are far from the image capturing device, part of the colors of their target fragments can be fused according to the actual weight values so that the corresponding colors are displayed, which effectively avoids the truncated display effect caused by complete non-display, improves the visual effect of the image, and provides reliability.
For the convenience of understanding the embodiments of the present application, the embodiments of the present application will be described with specific application scenario examples. Specifically, the application scenario example is described by executing the above steps 201-208.
The rendering processing method is applicable to image rendering in scenes such as twin maps, intelligent traffic, auxiliary driving, the metaverse, and the like. For example, taking the rendering processing of a twin map scene as an example, the specific details of the scene example are as follows:
1. The rendering scene example is briefly described as follows:
The scene example is used to solve the sub-pixel line problem; a sub-pixel line can be understood as the truncation phenomenon produced by a line element far away from the observation camera. The scene example can be applied to a digital twin UE4 visual map engine, where the UE4 visual map engine constructs a ground surface base from vector data, and other elements can be displayed by fusion and superposition on this ground surface base.
As shown in fig. 2, fig. 3 and fig. 7, the long and narrow lane line elements and boundary line elements in the road scene shown in fig. 2 are prone, in the farther areas, to the line-element truncation problem shown in fig. 3, which affects the visual effect. However, for the lane line elements and boundary line elements inside the dashed box in fig. 2, after the rendering processing of this scene example, the lane line elements and boundary line elements in the road scene can guarantee display quality whether far from or near to the camera device. For example, in conjunction with fig. 7, after the rendering processing of this scene example is applied to the lane line elements and boundary line elements inside the dashed box in fig. 2, when the lane line elements and boundary line elements are far from the image capturing device, part of the colors of their target fragments can be fused according to the actual weight values so that the corresponding colors are displayed; thus the far-away line elements do not produce a truncation phenomenon, the visual effect of the image is improved, and reliability is provided.
2. As shown in connection with fig. 9, the scene example may be implemented by a cloud computing service on the server side or an application program (client) on the terminal. Taking the application program (client) on the terminal as an example, the implementation process of the scene example is specifically as follows:
step S5.1: this step prepares the configuration before rendering the line elements. Specifically, for an element (usually an elongated line element) that needs to solve the sub-pixel rendering problem, each vertex of the line element under the sub-pixel needs to be extracted separately to construct a target primitive (e.g., a triangle primitive), the constructed primitive is bound to a material, and the material is marked to distinguish the line element under the sub-pixel from other elements in the scene. It should be noted that a texture may be understood as a configuration parameter (color, normal) and rendering logic (e.g., how to decolour, and other logic) within an image processing unit (GPU)
Step S5.2: this step is the general rendering-side phase, including the depth buffer (DepthBuffer) generation by the depth pass (DepthPass) and the rendering by the base pass (BasePass). The DepthPass stage collects vertex data of each element; for the elements marked by the material in S5.1, collection is skipped, i.e., the marked elements are not rendered into the DepthBuffer. In the BasePass stage, collection of the marked elements is likewise skipped according to the material mark, i.e., the marked elements are not rendered here, so that they can be rendered independently in S5.3. After the general phase of step S5.2, a target depth map of the entire scene is obtained, together with a rendered color map A of the elements other than the material-marked line elements (i.e., the aforementioned "second color rendering map").
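A compact sketch of this collection logic, under the assumption that each scene element exposes its material mark (the Element structure and the draw callbacks are hypothetical):

    #include <functional>
    #include <vector>

    struct Element { bool subPixelLineMark = false; /* geometry, material, ... */ };

    // Both DepthPass and BasePass skip collection of marked elements, so the marked line elements are
    // written neither to the DepthBuffer nor to rendered color map A; they are drawn later in S5.3.
    void RunPass(const std::vector<Element>& elements,
                 const std::function<void(const Element&)>& draw) {
        for (const Element& e : elements) {
            if (e.subPixelLineMark) continue;  // skip collection for the marked line elements
            draw(e);                           // normal collection and rendering of this element
        }
    }

RunPass would be invoked once with a depth-writing callback (producing the target depth map) and once with the base-pass callback (producing rendered color map A, the second color rendering map).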
Step S5.3: the independent rendering stage for the material-marked line elements. The target depth map generated in stage S5.2 serves as the basis for depth detection in this stage. The specific process of the independent rendering stage is as follows:
(1) The vertex shader normally performs coordinate conversion to convert world coordinates of each vertex on the line element to clipping coordinates.
(2) The purpose of the geometry shader stage is to perform vertex transformation. Specifically, the three vertices of the vertex combination corresponding to a target primitive (triangle) are input into the geometry shader; the geometry shader converts the three vertices into screen space and packs the triangle information corresponding to each vertex into the vertex data, so that the three vertices are carried along to the later stages.
(3) The fragment shader stage turns on the conservative rasterization function. It should be noted that the default rasterization operation of the rendering pipeline does not enable conservative rasterization: by default, a pixel is shaded only when its center point lies inside or on an edge of the triangle (target primitive), and pixels in other cases (e.g., the pixel center lies outside the triangle) are clipped and discarded. After conservative rasterization is enabled in the fragment shader stage, first, any pixel that has any intersection with the triangle is included as a target fragment to be shaded; then, a target fragment enters the fragment shader only when it passes the depth test against the target depth map. After entering the fragment shader, each target fragment carries the three vertices of the triangle to which it belongs. Further, in the fragment shader, each target fragment is split into a finer-grained 8×8 lattice (i.e., the aforementioned region array); the weight of each point (fragment sub-region) is t = 1/64, and the total weight s is initially 0. Each point in the lattice is tested for being inside the triangle, and every time a point lies inside the triangle the total weight s is increased by t, so that a weight in [0, 1] is finally obtained. With the initial value of each color component and the transparency of every pixel of the image being 0, when the color rendering map for the line elements is generated, the color part of each target fragment is output normally and the transparency part is output as the total weight s, giving the color rendering map B for the line elements (the first color rendering map).
For example, in conjunction with fig. 6, consider a segment of a line element formed by two triangles (target primitives); each grid cell represents a target fragment, and its center point represents the pixel center. When the conservative rasterization of this scene example is enabled, 5 pixels enter the rasterization shading flow; take the middle pixel as an example for simplicity of illustration. In fig. 6 the pixel is divided by a 4×4 lattice, and it can be seen that 5 points (target fragment sub-regions) in the lattice of the whole target fragment intersect the line segment, so the weight of the target fragment is 5/16. It should be noted that the finer the division granularity, the more accurate the statistical weight.
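To make the weight statistics concrete, the following is a minimal CPU-side C++ sketch of the 8×8 lattice described in (3). It assumes the triangle's screen-space vertices are carried with the fragment and that pixel (px, py) covers the unit square [px, px+1) × [py, py+1); the edge-function inside test, the sample-point placement and all names are illustrative assumptions, not the patent's prescribed implementation:

    struct Vec2 { float x, y; };

    // Edge function: signed area of (b - a) x (p - a); its sign tells on which side of edge a->b point p lies.
    static float Edge(const Vec2& a, const Vec2& b, const Vec2& p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // A point is inside the triangle when all three edge functions share the same sign (either winding).
    static bool InsideTriangle(const Vec2& p, const Vec2& v0, const Vec2& v1, const Vec2& v2) {
        const float e0 = Edge(v0, v1, p), e1 = Edge(v1, v2, p), e2 = Edge(v2, v0, p);
        return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
    }

    // Split the target fragment at pixel (px, py) into an 8x8 lattice of fragment sub-regions;
    // each sub-region weighs t = 1/64, and the total weight s in [0, 1] becomes the fragment's transparency.
    float FragmentWeight(int px, int py, const Vec2& v0, const Vec2& v1, const Vec2& v2) {
        const int   n = 8;
        const float t = 1.0f / (n * n);
        float s = 0.0f;                                                      // total weight, initially 0
        for (int j = 0; j < n; ++j) {
            for (int i = 0; i < n; ++i) {
                const Vec2 p { px + (i + 0.5f) / n, py + (j + 0.5f) / n };   // sub-region sample point
                if (InsideTriangle(p, v0, v1, v2)) s += t;                   // count sub-regions inside the triangle
            }
        }
        return s;   // written out as the transparency of the first color rendering map B
    }

With the lattice size set to 4 and 5 interior sample points, the same counting gives 5/16, matching the fig. 6 example; the 8×8 granularity simply makes the statistic finer.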
Step S5.4: the image mixing stage, in which the rendered color map A generated by the BasePass and the rendered color map B of the marked elements from S5.3 are fused together. Specifically, for each pixel of rendered color map B, its color value B_Color and transparency B_Alpha are read and fused with the color value A_Color at the corresponding position of rendered color map A. The fusion is expressed as: Color = A_Color × (1 - B_Alpha) + B_Color × B_Alpha, and the result Color is written back to the corresponding position in rendered color map A, thereby obtaining the target image of the whole scene. It should be noted that after S5.4 completes, the elements that may be problematic under the sub-pixel condition have been fused into the finally displayed rendered color map according to their actual coverage ratio. It can be seen that when the camera is close to such an element, the element covers many pixels, every point of the 8×8 lattice in each pixel lies inside a triangle, and the total weight is 1, so these pixels are correctly overlaid and rendered; when the camera is far from the element, the weights computed from the actual sampling still allow part of the color to be fused, so the corresponding element color is displayed even in the distant case instead of not being displayed at all and causing the truncation effect.
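A minimal sketch of this mixing stage, assuming both maps are plain RGBA float buffers of the same resolution (the buffer layout and names are illustrative, not the engine's actual resource types):

    #include <cstddef>
    #include <vector>

    struct RGBA { float r, g, b, a; };

    // Fuse rendered color map B (marked line elements) into rendered color map A (other elements):
    // Color = A_Color * (1 - B_Alpha) + B_Color * B_Alpha, written back into map A per pixel.
    void BlendLineElements(std::vector<RGBA>& mapA, const std::vector<RGBA>& mapB) {
        const std::size_t count = mapA.size() < mapB.size() ? mapA.size() : mapB.size();
        for (std::size_t i = 0; i < count; ++i) {
            const float bAlpha = mapB[i].a;   // transparency = total weight s counted in S5.3
            mapA[i].r = mapA[i].r * (1.0f - bAlpha) + mapB[i].r * bAlpha;
            mapA[i].g = mapA[i].g * (1.0f - bAlpha) + mapB[i].g * bAlpha;
            mapA[i].b = mapA[i].b * (1.0f - bAlpha) + mapB[i].b * bAlpha;
        }
    }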
By performing the above scene steps S5.1 to S5.4, the following effects can be achieved: the method can be applied to any rendering engine; through conservative rasterization and statistics of the line-element coverage ratio within each pixel, the problem in the original scheme that line elements are truncated when far from the observation camera is solved, and the visual effect of the line elements and of the whole map is improved.
As can be seen from the above, when target scene data to be rendered is obtained, the embodiments of the present application may first obtain vertex data of each vertex on a strip-shaped element (such as a line element) for which intermittent display abnormality easily occurs after rendering, and perform geometric processing to convert each vertex on the strip-shaped element into screen space; then, fragment matching is performed in screen space according to the target primitive to which each vertex belongs, so as to determine the target fragments that intersect the primitives of the strip-shaped element; then, the weight of each target fragment is determined according to the overlapping proportion between each target fragment and the target primitive to which it belongs, so that the target fragments belonging to the strip-shaped element are shaded according to the weight of each target fragment, thereby obtaining the first color rendering map and realizing independent rendering of the strip-shaped elements in the target scene that are prone to the intermittent display effect; finally, the first color rendering map is fused with the color rendering map of the other elements in the target scene according to the weight of each target fragment, so as to obtain the target image corresponding to the target scene. Therefore, the strip-shaped elements in the scene that easily exhibit intermittent display abnormalities can be separated from the other elements and rendered independently, where the independent rendering combines the weight of every fragment having an intersection, ensuring normal display of the strip-shaped elements; the independently rendered image is then fused with the image of the other elements, so that the line elements in the finally rendered image are displayed normally and the image quality is improved.
In order to better implement the above method, the embodiment of the application also provides a rendering processing device. For example, as shown in fig. 10, the rendering processing apparatus may include a processing unit 401, a matching unit 402, a determining unit 403, a coloring unit 404, and a blending unit 405.
The processing unit 401 is configured to perform geometric processing on vertex data of vertices on the strip-shaped element in the target scene, so as to obtain screen coordinates of each vertex on the strip-shaped element;
a matching unit 402, configured to perform fragment matching in screen space according to the screen coordinates of each vertex on the strip-shaped element and determine fragment information of each target fragment representing the strip-shaped element, where the fragment information indicates the target primitive to which the target fragment belongs, and the target primitive is constructed according to the vertices on the strip-shaped element; the target fragment refers to a pixel which has an intersection with the target primitive in screen space;
a determining unit 403, configured to determine a weight of each target fragment according to an overlapping proportion between each target fragment and the target primitive to which the target fragment belongs;
a coloring unit 404, configured to perform fragment coloring according to fragment information of each target fragment and a weight of each target fragment, so as to obtain a first color rendering map;
the fusing unit 405 is configured to fuse the first color rendering map and the second color rendering map according to the weight corresponding to each target fragment, so as to obtain a target image of the presented target scene; the second color rendering diagram is obtained by rendering according to vertex data of vertices on other elements except the strip-shaped elements in the target scene.
In some embodiments, the determining unit 403 is further configured to: divide each target fragment into a region array arranged in rows and columns, and determine the total number of fragment sub-regions in the region array; determine the fragment sub-regions in the region array that intersect the target primitive; determine the overlapping proportion between each target fragment and the target primitive according to the number of intersecting fragment sub-regions in each region array and the total number; and determine the overlapping proportion between each target fragment and the target primitive as the weight of the corresponding target fragment.
In some embodiments, the fusion unit 405 is further configured to: acquiring a first color value corresponding to each target pixel position in a first color rendering diagram; acquiring a second color value of the target pixel position from the second color rendering diagram; for each target pixel position, determining a first weighting coefficient for a first color value and a second weighting coefficient for a second color value according to the weight of a target pixel to which the target pixel position belongs; for each target pixel position, weighting the first color value and the second color value of the target pixel position according to the first weighting coefficient and the second weighting coefficient to obtain a fusion color value of the target pixel position; and obtaining a target image presenting the target scene according to the fusion color values of the target pixel positions.
In some embodiments, the fusion unit 405 is further configured to: for each target pixel position, take the weight of the target fragment to which the target pixel position belongs as the first weighting coefficient of the first color value corresponding to the target pixel position; and determine a second weighting coefficient of the second color value corresponding to the target pixel position according to the weight of the target fragment to which the target pixel position belongs, wherein the sum of the first weighting coefficient of the first color value corresponding to the same target pixel position and the second weighting coefficient of the second color value corresponding to the same target pixel position is 1.
In some embodiments, the matching unit 402 is further configured to: determine a target pixel which has an intersection with the target primitive to which each vertex belongs in screen space according to the screen coordinates of each vertex on the strip-shaped element; take the target pixel as a target fragment, and determine fragment information of each target fragment according to the position information of the target pixel and the vertex information of each vertex in the target primitive intersected with the target pixel; the vertex information is obtained by performing geometric processing on the vertex data of the vertex.
In some embodiments, the rendering processing apparatus further comprises a detection unit configured to: obtain a target depth map, wherein the target depth map is obtained by performing texture rendering according to vertex data of vertices on the elements other than the strip-shaped elements in the target scene; perform depth detection on each target fragment according to the target depth map to obtain a depth detection result of each target fragment; and determine the target fragments whose depth detection result is a pass as fragments to be processed;
The determining unit 403 is further configured to: determine the weight of each to-be-processed fragment according to the overlapping proportion between each to-be-processed fragment and the target primitive to which it belongs.
In some embodiments, the detection unit is further configured to: obtain a depth value of the pixel position where each target fragment is located from the target depth map, and take the depth value as the depth comparison value of each target fragment; compare the depth comparison value of each target fragment with the target depth value of the target fragment; if the target depth value of the target fragment is greater than the corresponding depth comparison value, determine that the depth detection result of the target fragment is a pass; if the target depth value of the target fragment is smaller than the corresponding depth comparison value, determine that the depth detection result of the target fragment is a fail.
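A short sketch of this depth detection rule, assuming the target depth map is a row-major float buffer and that a larger depth value means closer to the camera (a reversed-Z style convention, which is why a greater value passes); the buffer layout and names are illustrative assumptions:

    #include <cstddef>
    #include <vector>

    // Pass when the target fragment's depth is greater than the depth comparison value read from
    // the target depth map at the fragment's pixel position.
    bool DepthDetectionPasses(const std::vector<float>& targetDepthMap, int width,
                              int px, int py, float targetDepthValue) {
        const float depthComparisonValue =
            targetDepthMap[static_cast<std::size_t>(py) * width + px];
        return targetDepthValue > depthComparisonValue;
    }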
In some embodiments, the processing unit 401 is further configured to: construct a plurality of target primitives forming the strip-shaped element according to vertex data of vertices on the strip-shaped element in the target scene, wherein each target primitive is composed of a plurality of vertices; convert the world coordinates of each vertex associated with each target primitive into clipping coordinates; and convert the clipping coordinates of each vertex in the vertex combination corresponding to each target primitive into screen space to obtain the screen coordinates of each vertex on the strip-shaped element.
In some embodiments, the coloring unit 404 is further configured to: determining a first color value of each target fragment according to fragment information of each target fragment; determining the transparency of each target fragment according to the weight of each target fragment; and obtaining a first color rendering graph according to the first color value and the transparency of each target fragment.
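A minimal sketch of this shading output, assuming an RGBA render target where the alpha channel stores the fragment weight (the structure and function names are illustrative):

    struct RGBA { float r, g, b, a; };

    // Fragment shading for the first color rendering map: the color part is output normally from the
    // fragment's first color value, and the transparency part is output as the fragment's weight.
    RGBA ShadeLineElementFragment(const RGBA& firstColorValue, float fragmentWeight) {
        return { firstColorValue.r, firstColorValue.g, firstColorValue.b, fragmentWeight };
    }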
In some embodiments, the rendering processing apparatus further includes a rendering unit for: acquiring scene data of a target scene, wherein the scene data comprises vertex data of vertexes on all elements in the target scene; marking materials of vertex data of vertexes on the strip-shaped elements in the scene data; and performing rendering processing according to vertex data which is not marked by the materials in the scene data to obtain a second color rendering chart.
As can be seen from the above, when target scene data to be rendered is obtained, the embodiments of the present application may first obtain vertex data of each vertex on a strip-shaped element (such as a line element) for which intermittent display abnormality easily occurs after rendering, and perform geometric processing to convert each vertex on the strip-shaped element into screen space; then, fragment matching is performed in screen space according to the target primitive to which each vertex belongs, so as to determine the target fragments that intersect the primitives of the strip-shaped element; then, the weight of each target fragment is determined according to the overlapping proportion between each target fragment and the target primitive to which it belongs, so that the target fragments belonging to the strip-shaped element are shaded according to the weight of each target fragment, thereby obtaining the first color rendering map and realizing independent rendering of the strip-shaped elements in the target scene that are prone to the intermittent display effect; finally, the first color rendering map is fused with the color rendering map of the other elements in the target scene according to the weight of each target fragment, so as to obtain the target image corresponding to the target scene. Therefore, the strip-shaped elements in the scene that easily exhibit intermittent display abnormalities can be separated from the other elements and rendered independently, where the independent rendering combines the weight of every fragment having an intersection, ensuring normal display of the strip-shaped elements; the independently rendered image is then fused with the image of the other elements, so that the line elements in the finally rendered image are displayed normally and the image quality is improved.
The embodiment of the application further provides a computer device, as shown in fig. 11, which shows a schematic structural diagram of the computer device according to the embodiment of the application, specifically:
The computer device may include a processor 501 of one or more processing cores, a memory 502 of one or more computer-readable storage media, a power supply 503, an input unit 504, and other components. Those skilled in the art will appreciate that the computer device structure shown in FIG. 11 does not limit the computer device, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components. Wherein:
the processor 501 is the control center of the computer device, and uses various interfaces and lines to connect the various parts of the overall computer device, perform various functions of the computer device and process data by running or executing software programs and/or modules stored in the memory 502, and invoking data stored in the memory 502. Optionally, processor 501 may include one or more processing cores; preferably, the processor 501 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 501.
The memory 502 may be used to store software programs and modules, and the processor 501 executes various functional applications and rendering processes by executing the software programs and modules stored in the memory 502. The memory 502 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data created according to the use of the computer device, etc. In addition, memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide access to the memory 502 by the processor 501.
The computer device further includes a power supply 503 for powering the various components, and preferably the power supply 503 may be logically coupled to the processor 501 via a power management system such that functions such as charge, discharge, and power consumption management are performed by the power management system. The power supply 503 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The computer device may also include an input unit 504, which input unit 504 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit or the like, which is not described herein. In particular, in the embodiment of the present application, the processor 501 in the computer device loads executable files corresponding to the processes of one or more application programs into the memory 502 according to the following instructions, and the processor 501 executes the application programs stored in the memory 502, so as to implement various functions as follows:
performing geometric processing on vertex data of vertices on a strip-shaped element in a target scene to obtain screen coordinates of each vertex on the strip-shaped element; performing fragment matching in screen space according to the screen coordinates of each vertex on the strip-shaped element, and determining fragment information of each target fragment representing the strip-shaped element, wherein the fragment information indicates the target primitive to which the target fragment belongs, the target primitive is constructed according to the vertices on the strip-shaped element, and the target fragment refers to a pixel which has an intersection with the target primitive in screen space; determining the weight of each target fragment according to the overlapping proportion between each target fragment and the target primitive to which it belongs; performing fragment shading according to the fragment information and the weight of each target fragment to obtain a first color rendering map; and fusing the first color rendering map and the second color rendering map according to the weight corresponding to each target fragment to obtain a target image presenting the target scene, wherein the second color rendering map is obtained by rendering according to vertex data of vertices on the elements other than the strip-shaped element in the target scene.
The specific implementation of each operation may be referred to the previous embodiments, and will not be described herein.
When the target scene data to be rendered is obtained, vertex data of each vertex on a strip-shaped element (such as a line element) prone to intermittent display abnormality after rendering is first obtained and geometrically processed to convert each vertex on the strip-shaped element into screen space; then, fragment matching is performed in screen space according to the target primitive to which each vertex belongs, so as to determine the target fragments that intersect the primitives of the strip-shaped element; then, the weight of each target fragment is determined according to the overlapping proportion between each target fragment and the target primitive to which it belongs, so that the target fragments belonging to the strip-shaped element are shaded according to the weight of each target fragment, thereby obtaining the first color rendering map and realizing independent rendering of the strip-shaped elements in the target scene that are prone to the intermittent display effect; finally, the first color rendering map is fused with the color rendering map of the other elements in the target scene according to the weight of each target fragment, so as to obtain the target image corresponding to the target scene. Therefore, the strip-shaped elements in the scene that easily exhibit intermittent display abnormalities can be separated from the other elements and rendered independently, where the independent rendering combines the weight of every fragment having an intersection, ensuring normal display of the strip-shaped elements; the independently rendered image is then fused with the image of the other elements, so that the line elements in the finally rendered image are displayed normally and the image quality is improved.
To this end, embodiments of the present application provide a computer readable storage medium having stored therein a plurality of instructions capable of being loaded by a processor to perform steps in any of the rendering processing methods provided by embodiments of the present application. For example, the instructions may perform the steps of:
performing geometric processing on vertex data of vertices on a strip-shaped element in a target scene to obtain screen coordinates of each vertex on the strip-shaped element; performing fragment matching in screen space according to the screen coordinates of each vertex on the strip-shaped element, and determining fragment information of each target fragment representing the strip-shaped element, wherein the fragment information indicates the target primitive to which the target fragment belongs, the target primitive is constructed according to the vertices on the strip-shaped element, and the target fragment refers to a pixel which has an intersection with the target primitive in screen space; determining the weight of each target fragment according to the overlapping proportion between each target fragment and the target primitive to which it belongs; performing fragment shading according to the fragment information and the weight of each target fragment to obtain a first color rendering map; and fusing the first color rendering map and the second color rendering map according to the weight corresponding to each target fragment to obtain a target image presenting the target scene, wherein the second color rendering map is obtained by rendering according to vertex data of vertices on the elements other than the strip-shaped element in the target scene.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the computer-readable storage medium may comprise: read Only Memory (ROM), random access Memory (RAM, random Access Memory), magnetic or optical disk, and the like.
Because the instructions stored in the computer readable storage medium may execute the steps in any one of the rendering methods provided in the embodiments of the present application, the beneficial effects that any one of the rendering methods provided in the embodiments of the present application may achieve are detailed in the previous embodiments and are not described herein.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the various alternative implementations provided in the above embodiments.
The foregoing has described in detail the methods, apparatuses, devices and computer readable storage medium for rendering processing provided by the embodiments of the present application, and specific examples have been applied herein to illustrate the principles and implementations of the present application, and the description of the foregoing embodiments is only for aiding in the understanding of the methods and core ideas of the present application; meanwhile, as those skilled in the art will vary in the specific embodiments and application scope according to the ideas of the present application, the contents of the present specification should not be construed as limiting the present application in summary.

Claims (14)

1. A rendering processing method, comprising:
performing geometric processing on vertex data of vertexes on a strip-shaped element in a target scene to obtain screen coordinates of all vertexes on the strip-shaped element;
performing fragment matching in a screen space according to screen coordinates of each vertex on the strip-shaped element, and determining fragment information of each target fragment representing the strip-shaped element, wherein the fragment information indicates a target primitive to which the target fragment belongs, and the target primitive is constructed according to the vertex on the strip-shaped element; the target fragment refers to a pixel which has an intersection with the target primitive in the screen space;
determining the weight of each target fragment according to the overlapping proportion between each target fragment and the target primitive to which it belongs;
performing fragment coloring according to fragment information of each target fragment and the weight of each target fragment to obtain a first color rendering graph;
according to the weight corresponding to each target fragment, fusing the first color rendering graph and the second color rendering graph to obtain a target image showing a target scene; and the second color rendering diagram is obtained by rendering according to vertex data of vertices on other elements except the strip-shaped elements in the target scene.
2. The method of claim 1, wherein determining the weight of each target fragment based on the overlapping proportion of each target fragment with respect to the target primitive to which it belongs comprises:
dividing each target fragment into an area array with row-column arrangement, and determining the total number of fragment subregions in the area array;
determining the fragment sub-regions in the region array that intersect the target primitive;
determining the overlapping proportion between each target fragment and the target primitive according to the number of intersecting fragment sub-regions in each region array and the total number;
and determining the overlapping proportion between each target fragment and the target primitive as the weight of the corresponding target fragment.
3. The method according to claim 1 or 2, wherein the fusing the first color rendering map and the second color rendering map according to the weight corresponding to each target fragment to obtain the target image of the target scene comprises:
acquiring a first color value corresponding to each target pixel position in the first color rendering diagram;
acquiring a second color value of the target pixel position from the second color rendering diagram;
For each target pixel position, determining a first weighting coefficient for the first color value and a second weighting coefficient for the second color value according to the weight of the target pixel to which the target pixel position belongs;
for each target pixel position, weighting the first color value and the second color value of the target pixel position according to the first weighting coefficient and the second weighting coefficient to obtain a fusion color value of the target pixel position;
and obtaining a target image presenting the target scene according to the fusion color values of the target pixel positions.
4. A method according to claim 3, wherein for each target pixel location, determining a first weighting factor for the first color value and a second weighting factor for the second color value based on the weight of the target pixel to which the target pixel location belongs comprises:
for each target pixel position, taking the weight of the target fragment to which the target pixel position belongs as a first weighting coefficient of the first color value corresponding to the target pixel position;
and determining a second weighting coefficient of a second color value corresponding to the target pixel position according to the weight of the target pixel to which the target pixel position belongs, wherein the sum of the first weighting coefficient of the first color value corresponding to the same target pixel position and the second weighting coefficient of the second color value corresponding to the same target pixel position is 1.
5. The method according to claim 1, wherein the performing fragment matching in a screen space according to the screen coordinates of each vertex on the strip-shaped element and determining the fragment information of each target fragment representing the strip-shaped element comprises:
determining a target pixel with an intersection with a target primitive to which each vertex belongs in the screen space according to the screen coordinates of each vertex on the strip-shaped element;
taking the target pixel as a target fragment, and determining fragment information of each target fragment according to the position information of the target pixel and the vertex information of each vertex in the target primitive intersected with the target pixel; the vertex information is obtained by performing geometric processing on vertex data of the vertex.
6. A method according to claim 1 or 2, wherein before determining the weight of each target fragment according to the overlapping proportion between each target fragment and the target primitive to which it belongs, the method comprises:
obtaining a target depth map, wherein the target depth map is obtained by performing texture rendering according to vertex data of vertexes on other elements except the strip-shaped elements in the target scene;
Performing depth detection on each target fragment according to the target depth map to obtain a depth detection result of each target fragment;
determining the target fragments whose depth detection result is a pass as fragments to be processed;
determining the weight of each target fragment according to the overlapping proportion between each target fragment and the target primitive to which it belongs comprises:
and determining the weight of each to-be-processed fragment according to the overlapping proportion between each to-be-processed fragment and the target primitive to which it belongs.
7. The method of claim 6, wherein the performing depth detection on each target fragment according to the target depth map to obtain a depth detection result of each target fragment comprises:
obtaining a depth value of a pixel position where each target fragment is located from the target depth map, and taking the depth value as a depth comparison value of each target fragment;
comparing the depth comparison value of each target fragment with the target depth value of the target fragment;
if the target depth value of the target fragment is greater than the corresponding depth comparison value, determining that the depth detection result of the target fragment is a pass;
and if the target depth value of the target fragment is smaller than the corresponding depth comparison value, determining that the depth detection result of the target fragment is a fail.
8. The method according to claim 1, wherein geometrically processing vertex data of vertices on a strip-like element in the target scene to obtain screen coordinates of each vertex on the strip-like element comprises:
constructing a plurality of target primitives forming the strip-shaped element according to vertex data of vertexes on the strip-shaped element in the target scene, wherein each target primitive consists of a plurality of vertexes;
converting world coordinates of each vertex associated with each target primitive into clipping coordinates;
and converting the clipping coordinates of each vertex in the vertex combination corresponding to each target primitive into a screen space to obtain the screen coordinates of each vertex on the strip-shaped element.
9. The method according to claim 1, wherein performing fragment coloring according to fragment information of each target fragment and weight of each target fragment to obtain a first color rendering map includes:
determining a first color value of each target fragment according to fragment information of each target fragment;
determining the transparency of each target fragment according to the weight of each target fragment;
and obtaining a first color rendering graph according to the first color value and the transparency of each target fragment.
10. The method according to claim 1, wherein the method further comprises:
acquiring scene data of the target scene, wherein the scene data comprises vertex data of vertexes on all elements in the target scene;
marking materials of vertex data of vertexes on the strip-shaped elements in the scene data;
and rendering according to vertex data which is not marked by the materials in the scene data, so as to obtain a second color rendering chart.
11. A rendering processing apparatus, comprising:
the processing unit is used for performing geometric processing on vertex data of vertexes on the strip-shaped elements in the target scene to obtain screen coordinates of all vertexes on the strip-shaped elements;
the matching unit is used for performing fragment matching in a screen space according to the screen coordinates of each vertex on the strip-shaped element and determining fragment information of each target fragment representing the strip-shaped element, wherein the fragment information indicates the target primitive to which the target fragment belongs, and the target primitive is constructed according to the vertex on the strip-shaped element; the target fragment refers to a pixel which has an intersection with the target primitive in the screen space;
the determining unit is used for determining the weight of each target fragment according to the overlapping proportion between each target fragment and the target primitive to which the target fragment belongs;
The coloring unit is used for coloring the fragments according to the fragment information of each target fragment and the weight of each target fragment to obtain a first color rendering diagram;
the fusion unit is used for fusing the first color rendering graph and the second color rendering graph according to the weight corresponding to each target fragment to obtain a target image of the presented target scene; and the second color rendering diagram is obtained by rendering according to vertex data of vertices on other elements except the strip-shaped elements in the target scene.
12. A computer device comprising a processor and a memory, the memory storing a computer program, the processor being configured to execute the computer program in the memory to perform the steps of the rendering processing method of any one of claims 1 to 10.
13. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps in the rendering processing method of any one of claims 1 to 10.
14. A computer program product comprising computer instructions which, when executed, implement the steps in the rendering processing method of any one of claims 1 to 10.
CN202311433604.0A 2023-10-31 2023-10-31 Rendering processing method, apparatus, device and computer readable storage medium Pending CN117523072A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311433604.0A CN117523072A (en) 2023-10-31 2023-10-31 Rendering processing method, apparatus, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311433604.0A CN117523072A (en) 2023-10-31 2023-10-31 Rendering processing method, apparatus, device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN117523072A true CN117523072A (en) 2024-02-06

Family

ID=89752288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311433604.0A Pending CN117523072A (en) 2023-10-31 2023-10-31 Rendering processing method, apparatus, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117523072A (en)

Similar Documents

Publication Publication Date Title
CN107358649B (en) Processing method and device of terrain file
Vanegas et al. Modelling the appearance and behaviour of urban spaces
CN103946895B (en) The method for embedding in presentation and equipment based on tiling block
CN110990516B (en) Map data processing method, device and server
CN105393282A (en) Efficient composition and rendering of graphical elements
CN113674389B (en) Scene rendering method and device, electronic equipment and storage medium
US9224233B2 (en) Blending 3D model textures by image projection
US20230033319A1 (en) Method, apparatus and device for processing shadow texture, computer-readable storage medium, and program product
CN113409411A (en) Rendering method and device of graphical interface, electronic equipment and storage medium
EP4058162A1 (en) Programmatically configuring materials
CN111179390B (en) Method and device for efficiently previewing CG (content distribution) assets
CN112883102A (en) Data visualization display method and device, electronic equipment and storage medium
US10657705B2 (en) System and method for rendering shadows for a virtual environment
CN115690286B (en) Three-dimensional terrain generation method, terminal device and computer readable storage medium
WO2023005934A1 (en) Data processing method and system, and electronic device
CN117523072A (en) Rendering processing method, apparatus, device and computer readable storage medium
CN113192173B (en) Image processing method and device of three-dimensional scene and electronic equipment
CN111462343B (en) Data processing method and device, electronic equipment and storage medium
CN115423953A (en) Water pollutant visualization method and terminal equipment
Döllner Geovisualization and real-time 3D computer graphics
CN104699850B (en) The processing method and processing device of three-dimensional geographic information
CN114419219A (en) Rendering device and method based on three-dimensional virtual scene
CN115727869A (en) Map display method, map display device, computer equipment, medium and program product
CN114791986A (en) Three-dimensional information model processing method and device
CN116824028B (en) Image coloring method, apparatus, electronic device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication