CN115790638A - Road rendering method, device, equipment and storage medium - Google Patents

Road rendering method, device, equipment and storage medium

Info

Publication number
CN115790638A
CN115790638A
Authority
CN
China
Prior art keywords
road
point
texture
transparency
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211552295.4A
Other languages
Chinese (zh)
Inventor
李凡 (Li Fan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongsoft Group Dalian Co ltd
Neusoft Corp
Original Assignee
Dongsoft Group Dalian Co ltd
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongsoft Group Dalian Co ltd, Neusoft Corp filed Critical Dongsoft Group Dalian Co ltd
Priority to CN202211552295.4A priority Critical patent/CN115790638A/en
Publication of CN115790638A publication Critical patent/CN115790638A/en
Pending legal-status Critical Current

Abstract

The application provides a road rendering method, apparatus, device, and storage medium. The method comprises the following steps: acquiring, for each landmark point in a navigation route, the corresponding road auxiliary points and their texture coordinates; determining a texture rendering value for each road point according to the texture coordinates of the road auxiliary points and a predefined road surface transparency transformation relation; and rendering the corresponding virtual navigation road according to the texture rendering value of each road point. In this way, multiple effects in the virtual navigation road can be produced in a single rendering pass, without superimposing the effects through repeated rendering operations, which simplifies the road rendering procedure and improves the efficiency and personalization of road rendering.

Description

Road rendering method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of data processing, in particular to a road rendering method, a road rendering device, road rendering equipment and a storage medium.
Background
With the rapid development of Augmented Reality (AR) and Head-Up Display (HUD) technology, AR-HUD products can be used in a vehicle to generate a virtual navigation road from a navigation route and render it onto a virtual screen in front of the driver, so that the driver can stay focused on driving.
At present, in order to diversify virtual navigation roads in an AR-HUD scene, a rendered virtual navigation road is generally expected to exhibit several road effects at once, such as a wide-line road surface, narrow solid lines on both sides, a road glow, or a streamer. Presenting each road effect requires an independent rendering pass, so the various effects are superimposed through multiple rendering operations to complete the diversified presentation of the virtual navigation road. However, this way of rendering the road is cumbersome and greatly degrades rendering performance.
Disclosure of Invention
The embodiments of the application provide a road rendering method, apparatus, device, and storage medium, which can render multiple effects of a virtual navigation road in a single pass through a predefined road surface transparency transformation relation, simplifying the road rendering procedure and improving the efficiency and personalization of road rendering.
In a first aspect, an embodiment of the present application provides a road rendering method, where the method includes:
acquiring a road auxiliary point corresponding to each road sign point in a navigation route and texture coordinates of the road auxiliary point;
determining a texture rendering value of each road point according to the texture coordinates of the road auxiliary points and a predefined road surface transparency transformation relation;
and rendering the corresponding virtual navigation road according to the texture rendering value of each road point.
In a second aspect, an embodiment of the present application provides a road rendering apparatus, including:
the road point acquisition module is used for acquiring a road auxiliary point corresponding to each road sign point in a navigation route and texture coordinates of the road auxiliary point;
the texture determining module is used for determining a texture rendering value of each road point according to the texture coordinate of the road auxiliary point and a predefined road surface transparency transformation relation;
and the road rendering module is used for rendering the corresponding virtual navigation road according to the texture rendering value of each road point.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor and a memory, the memory being used for storing a computer program, the processor being used for calling and running the computer program stored in the memory to execute the road rendering method provided in the first aspect of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium for storing a computer program, the computer program enabling a computer to execute the road rendering method as provided in the first aspect of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product comprising computer programs/instructions which, when executed by a processor, implement a road rendering method as provided in the first aspect of the present application.
The embodiments of the application provide a road rendering method, apparatus, device, and storage medium. For each landmark point in a navigation route, the corresponding road auxiliary points and their texture coordinates are obtained. Then, according to the texture coordinates of the road auxiliary points and a predefined road surface transparency transformation relation, the texture rendering value of each road point can be determined and the corresponding virtual navigation road rendered. Multiple effects in the virtual navigation road are thus produced in a single rendering pass, without superimposing the effects through repeated rendering operations, which simplifies the road rendering procedure and improves the efficiency and personalization of road rendering.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below cover only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a road rendering method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a road auxiliary point corresponding to each landmark point according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating another road rendering method according to an embodiment of the present disclosure;
fig. 4 is a schematic block diagram of a road rendering apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic block diagram of an electronic device shown in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can be practised in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In an AR-HUD scene, when a virtual navigation road has multiple road effects, each effect must be rendered in a separate pass and the results superimposed, which makes road rendering cumbersome. To solve this problem, the embodiments of the application design a new road rendering scheme. A road surface transparency transformation relation predefined for each moment represents the unified effect obtained by fusing the various road effects. The texture rendering value of each road point is then determined from the texture coordinates of the road auxiliary points corresponding to each road point in the navigation route and this transparency transformation relation, and the corresponding virtual navigation road is rendered. Multiple effects are thus rendered in one pass, simplifying the road rendering procedure.
Fig. 1 is a flowchart of a road rendering method according to an embodiment of the present disclosure. Referring to fig. 1, the method may include the steps of:
s110, acquiring the road auxiliary points corresponding to each road sign point in the navigation route and texture coordinates of the road auxiliary points.
In an AR-HUD scene, to keep the driver focused on driving, a virtual navigation road is generated from the navigation route while the vehicle is being navigated and rendered onto a virtual screen in front of the driver. The driver can then view the virtual navigation road directly ahead of the vehicle without looking down at a navigation screen, which helps ensure driving safety.
It should be understood that a navigation route generally consists of a series of continuously collected landmark points carrying coordinate information; a landmark point is a road coordinate point stored in the navigation route. The virtual navigation road rendered for the user, however, is a complete road surface represented by the corresponding lane. That is, the application needs to expand the navigation route into a lane surface.
Therefore, for each landmark point in the navigation route recorded with coordinate information, road auxiliary points can be set at corresponding positions on both sides of the landmark point according to the road width of the virtual navigation road to be rendered, so as to represent the road boundary of the virtual navigation road. The application can thus obtain the coordinate information of the road auxiliary points corresponding to each landmark point.
For example, the road auxiliary points in the present application may be the vertices obtained by triangulating the road on the basis of the landmark points.
As an alternative in the present application, the road auxiliary points corresponding to each landmark point may be obtained as follows: obtain at least one first road auxiliary point on the left side of the landmark point and at least one second road auxiliary point on the right side of the landmark point.
That is, each landmark point in the navigation route may be taken as the road centre point. A first road auxiliary point is then set at least one spaced position on the left side of the landmark point, representing a left boundary point of the virtual navigation road in the same horizontal direction as the landmark point; and a second road auxiliary point is set at the same spaced position on the right side, representing the corresponding right boundary point.
At this time, the first road auxiliary points and the second road auxiliary points under each road marking point are in one-to-one correspondence.
For example, as shown in fig. 2, assuming that the navigation route is composed of five landmark points p1, p2, p3, p4, and p5, a first road assistant point may be set at a certain interval position on the left side of each landmark point, and a second road assistant point may be set at the same interval position on the right side of the landmark point, so as to obtain a road assistant point corresponding to each landmark point.
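The patent does not give code for this construction; as a minimal sketch, the left and right auxiliary points of each landmark point can be placed by offsetting perpendicular to the local direction of the route by half the road width (the function name and the perpendicular-offset method are assumptions for illustration, not taken from the patent):

```python
import math

def road_auxiliary_points(landmarks, half_width):
    """For each landmark (road centre) point, place one auxiliary point on its
    left and one on its right, offset perpendicular to the local direction of
    travel by half the road width."""
    aux = []
    for i, (x, y) in enumerate(landmarks):
        # Local route direction via central difference, clamped at the ends.
        nx_, ny_ = landmarks[min(i + 1, len(landmarks) - 1)]
        px_, py_ = landmarks[max(i - 1, 0)]
        dx, dy = nx_ - px_, ny_ - py_
        norm = math.hypot(dx, dy) or 1.0
        # Unit normal pointing to the left of the travel direction.
        lx, ly = -dy / norm, dx / norm
        left = (x + lx * half_width, y + ly * half_width)
        right = (x - lx * half_width, y - ly * half_width)
        aux.append((left, right))
    return aux
```

For a straight route along the x-axis, the left auxiliary points land at y = +half_width and the right ones at y = -half_width, matching the one-to-one left/right correspondence described above.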
To guarantee the rendering effect of the virtual navigation road, the texture information at each position point within the road must be analysed to express its texture effect. The landmark points in the navigation route, however, do not carry any texture information.
Therefore, after the road auxiliary points corresponding to each landmark point are obtained, the application can set a texture coordinate for each road auxiliary point to represent its texture information, facilitating subsequent analysis of the texture information of each road point in the rendered virtual navigation road. The texture coordinate of each road auxiliary point may be a texture coordinate point produced by the road triangulation.
And S120, determining a texture rendering value of each road surface point according to the texture coordinate of the road auxiliary point and the predefined road surface transparency transformation relation.
The road auxiliary points of all the landmark points together represent the boundary vertices of the rendered virtual navigation road, and their texture coordinates represent its boundary texture information. At least three road auxiliary points can therefore form one road surface patch of the virtual navigation road, with the road auxiliary points as its vertices. A texture region can then be formed from the texture coordinates of these vertices, and by analysing the coordinates of each point in the texture region, the texture coordinate of any point on the road surface can be determined.
The road points in the present application are the position points in the rendered virtual navigation road, and may include, but are not limited to, landmark points and road auxiliary points. In the above manner, the texture coordinate of each road point can be determined.
Illustratively, for a road surface patch formed by at least three road auxiliary points in the virtual navigation road, the application can rasterize the patch to obtain its pixels, each of which can be treated as a road point.
Consider that the higher the transparency of a road point, the brighter it appears, and the lower its transparency, the darker it appears. Road effects in the rendered virtual navigation road, such as a wide-line road surface or narrow solid lines on both sides, can therefore be formed by setting different transparencies at different road points.
Therefore, at each moment, a corresponding transparency can be set for each texture interval according to the overall road effect to be rendered, representing the transparency matched to the road points within that interval; this is the predefined road surface transparency transformation relation of the application, which represents a mapping between texture intervals and transparencies.
Specifically, according to the overall road effect the user wants to render at each moment, several texture intervals whose road effects differ markedly and change irregularly can be marked out. For example, the middle of the road usually appears as an ordinary road surface pattern, while the areas on both sides usually appear as green-belt patterns, so the middle area and the two side areas can be divided into different texture intervals. Within each texture interval the road effect may be uniform or follow a certain rule of change; for example, the colour of the middle area may fade gradually from the centre line towards both sides, while both side areas share the same green-belt-like colour.
Then, according to the road effect to be rendered in each texture interval, corresponding transparency configuration information can be set for that interval, yielding the road surface transparency transformation relation of the application.
For example, the road surface transparency transformation relation in the present application can be represented by the following table:

Texture interval 1    Transparency configuration information 1
Texture interval 2    Transparency configuration information 2
...                   ...
Texture interval n    Transparency configuration information n
Furthermore, the texture interval containing each road point can be determined from its texture coordinate. The transparency configuration information corresponding to that interval is then looked up in the predefined road surface transparency transformation relation, and the initial texture information of the road point is processed with this transparency to obtain the actual texture value of the road point, which is taken as its texture rendering value.
In the same manner as described above, the texture rendering value of each road point can be calculated.
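The interval-to-transparency lookup above can be sketched as follows; the list-of-pairs representation of the transformation relation and the example interval values are illustrative assumptions, not taken from the patent:

```python
def texture_transparency(u, intervals):
    """Return the transparency configured for the texture interval that
    contains lateral texture coordinate u. `intervals` is a list of
    ((lo, hi), alpha) pairs sketching the predefined road surface
    transparency transformation relation."""
    for (lo, hi), alpha in intervals:
        if lo <= u <= hi:
            return alpha
    return 0.0  # outside every interval: fully transparent

# Illustrative relation: a bright centre band and dimmer shoulders.
relation = [((-0.3, 0.3), 1.0), ((-1.0, -0.3), 0.4), ((0.3, 1.0), 0.4)]
```

A road point with lateral texture coordinate 0.0 then gets transparency 1.0 (bright centre), while one at 0.7 gets 0.4 (dim shoulder), matching the per-interval configuration described above.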
And S130, rendering the corresponding virtual navigation road according to the texture rendering value of each road point.
According to the texture rendering value of each road point, a corresponding texture can be rendered at that point, so that the corresponding virtual navigation road is rendered with the overall appearance of the fused road effects.
In the technical solution provided by the embodiments of the application, for each landmark point in the navigation route, the corresponding road auxiliary points and their texture coordinates are obtained. Then, according to the texture coordinates of the road auxiliary points and a predefined road surface transparency transformation relation, the texture rendering value of each road point can be determined and the corresponding virtual navigation road rendered. Multiple effects in the virtual navigation road are thus produced in a single rendering pass, without superimposing the effects through repeated rendering operations, which simplifies the road rendering procedure and improves the efficiency and personalization of road rendering.
As an optional implementation, the determination of the texture rendering value of each road point is described in detail below.
Fig. 3 is a flowchart illustrating another road rendering method according to an embodiment of the present application, and as shown in fig. 3, the method may include the following steps:
s310, acquiring the road auxiliary point corresponding to each road sign point in the navigation route and the texture coordinate of the road auxiliary point.
And S320, interpolating according to the texture coordinates of the road auxiliary points to obtain corresponding road surface points, and determining the texture coordinates of each road surface point.
After the road auxiliary points of each landmark point are determined, every other road point lies between road auxiliary points, since the road auxiliary points represent the boundary vertices of the rendered virtual navigation road. Therefore, from the texture coordinates of every pair of road auxiliary points, a number of road points can be interpolated between them, and the texture coordinate of each interpolated road point determined.
For example, if the texture coordinates of the two road auxiliary points on either side of a landmark point are (-1,0) and (1,0), a series of road points (-0.9,0), (-0.8,0), …, (0.8,0), (0.9,0) can be interpolated between them.
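The interpolation in the example above can be sketched as plain linear interpolation between the two auxiliary points (a sketch only; in a real pipeline the GPU rasterizer performs this interpolation of texture coordinates automatically):

```python
def interpolate_road_points(left_uv, right_uv, n):
    """Linearly interpolate n evenly spaced texture coordinates strictly
    between the left and right road auxiliary points of one landmark."""
    (u0, v0), (u1, v1) = left_uv, right_uv
    pts = []
    for i in range(1, n + 1):
        t = i / (n + 1)
        pts.append((u0 + (u1 - u0) * t, v0 + (v1 - v0) * t))
    return pts
```

With left_uv = (-1, 0), right_uv = (1, 0) and n = 19, this reproduces the sequence (-0.9, 0), (-0.8, 0), …, (0.9, 0) from the example.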
And S330, based on the predefined road surface transparency transformation relation, determining the texture transparency of each road point from its texture coordinate.
The road surface transparency transformation relation in the application represents the mapping between texture intervals and transparency values. The texture interval containing each road point can therefore be determined from the road point's texture coordinate, and the transparency mapped to that interval in the transformation relation is taken as the texture transparency of the road point.
In the same manner as described above, the texture transparency of each road point can be determined.
As an alternative implementation, the virtual navigation road may exhibit different road effects in the longitudinal and transverse directions; for example, the left and right sides of the road may be wide lines while the middle of the road darkens progressively towards the near and far ends. The road surface transparency transformation relation may therefore comprise both a transverse transparency transformation relation and a longitudinal transparency transformation relation.
The transverse transparency transformation relation represents the transparency transformation rules matched to different transverse texture intervals, and the longitudinal transparency transformation relation represents those matched to different longitudinal texture intervals.
That is, according to the transverse and longitudinal texture-change effects in the overall road effect the user wants to render at each moment, several transverse texture intervals with markedly different, irregularly changing transverse effects, and likewise several such longitudinal texture intervals, can first be marked out. Corresponding transverse transparency configuration information is then set for each transverse texture interval, and corresponding longitudinal transparency configuration information for each longitudinal texture interval, yielding the transverse and longitudinal transparency transformation relations of the application.
Furthermore, the texture transparency of each road point may be determined by the following steps:
the method comprises the following steps of firstly, respectively determining the transverse texture transparency and the longitudinal texture transparency of each road surface point according to the texture coordinate of each road surface point based on the predefined transverse transparency transformation relation and longitudinal transparency transformation relation.
After the texture coordinate of each road point is obtained, the transverse texture interval containing the road point can be determined from its transverse texture coordinate value, and the longitudinal texture interval containing it from its longitudinal texture coordinate value.
Then, the transparency transformation rule matched to the road point's transverse texture interval is found in the transverse transparency transformation relation, from which the transverse texture transparency of the road point is determined. Likewise, the rule matched to its longitudinal texture interval is found in the longitudinal transparency transformation relation, from which its longitudinal texture transparency is determined.
In the same manner as described above, the lateral texture transparency and the longitudinal texture transparency of each road point can be determined.
Step 2: for each road point, compute the product of its transverse texture transparency, its longitudinal texture transparency, and a predefined road transparency, and take the product as the texture transparency of the road point.
Besides the per-point transparencies, the virtual navigation road may have an overall effect shared by all road points, which is expressed through the road transparency. For example, a fade-in effect may be applied when the virtual navigation road first appears: the road transparency starts at zero and increases with rendering time, e.g. 0.1 at the first rendering moment, 0.3 at the second, 0.5 at the third, and so on until it reaches 1.
Accordingly, after the transverse and longitudinal texture transparencies of each road point are obtained, the product of the transverse texture transparency, the longitudinal texture transparency, and the road transparency predefined for the current rendering moment can be calculated as the texture transparency of the road point.
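Step 2 above is a simple three-way product; as a sketch (the fade-in values 0.1/0.3/0.5/1.0 repeat the example in the text, everything else is illustrative):

```python
def combined_transparency(lateral_alpha, longitudinal_alpha, road_alpha):
    """Texture transparency of one road point: the product of its transverse
    transparency, its longitudinal transparency, and the global road
    transparency at the current rendering moment."""
    return lateral_alpha * longitudinal_alpha * road_alpha

# Fade-in sketch: the global road transparency grows over rendering moments.
fade = [0.1, 0.3, 0.5, 1.0]
```

At the second rendering moment, a road point with transverse transparency 0.8 and longitudinal transparency 0.5 would therefore render with texture transparency 0.8 × 0.5 × 0.3 = 0.12.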
S340, determining the special effect type of the road auxiliary special effect under the navigation route.
In the application, road auxiliary special effects, such as a glow effect or a streamer effect, can be added to the conventional road preliminarily rendered with the predefined road surface transparency transformation relation, so as to enhance the diversity of road rendering. Different road auxiliary special effects impose different rendering requirements on different road points. Therefore, after the texture transparency of each road point is preliminarily determined, it is judged whether the navigation route carries a road auxiliary special effect that further enhances the rendering effect. If it does, the special effect type of the road auxiliary special effect is determined first, so that the corresponding special effect rendering parameters can be selected per type.
And S350, updating the texture transparency of each road point according to the special effect rendering parameters corresponding to the special effect types.
According to the rendering requirements of different road auxiliary special effects on different road points, the special effect types can be used as key names in key-value pairs, and under each key name the parameters realising that effect's specific rendering behaviour at each road point are preset, yielding the special effect rendering parameters corresponding to each special effect type.
After the special effect type of the road auxiliary special effect under the navigation route is determined, the special effect rendering parameters under that type can be looked up. The texture transparency of each road point is then updated with the parameters set for that road point, so that the road point further takes on the rendering style of the road auxiliary special effect.
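The key-value organisation of the special effect rendering parameters can be sketched as a plain mapping; the parameter names and values here (and the `streamer` entries) are illustrative assumptions, except that the glow values 0.3/0.2/0.5 echo the worked example later in the text:

```python
# Special-effect rendering parameters keyed by effect type, as key-value
# pairs: one parameter set per special effect type.
EFFECT_PARAMS = {
    "glow": {"radius": 0.3, "expansion_ratio": 0.2, "preset_alpha": 0.5},
    "streamer": {"speed": 0.1, "band_width": 0.15},
}

def effect_params(effect_type):
    """Look up the rendering parameters configured for a road auxiliary
    special effect; returns None when the route carries no such effect."""
    return EFFECT_PARAMS.get(effect_type)
```

The renderer first resolves the effect type of the route, fetches the matching parameter set, and only then updates each road point's texture transparency with it.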
As an alternative implementation in the present application, the road assistance effect may include both a glow effect and a streamer effect. Next, a description is given of a specific updating process of the texture transparency of each road point under the glow effect and the streamer effect, respectively:
1) If the special effect type of the road auxiliary special effect is the glow effect, the glow transparency of each road point is determined according to the predefined glow radius, transverse expansion ratio, and glow preset transparency; the texture transparency of each road point is then updated with its glow transparency.
The glow effect in the rendered virtual navigation road may be a breathing effect that changes dynamically from the road centre towards both sides in the transverse direction. For this effect, a glow radius can be predefined in the transverse direction: within the glow radius the glow transparency of every road point is at its maximum and does not change, and the application predefines a glow preset transparency to represent this maximum. Moreover, to exhibit the transverse breathing of the glow, a different transverse expansion ratio can be set at each rendering moment; it expresses the proportion of the transverse texture interval expanded beyond the glow radius and controls the gradual darkening of the glow.
According to the glow radius, a first lateral texture interval in which the glow is fixed can be determined. According to the glow radius and the transverse expansion ratio, a second lateral texture interval can be determined, in which the glow transparency gradually changes from the glow preset transparency at the boundary of the glow radius down to 0. Further, the glow transparency matched to the remaining third lateral texture interval, outside the first and second lateral texture intervals, may be determined to be 0.
Therefore, according to the texture coordinates of each road surface point, it can be determined in which of the first, second and third lateral texture intervals the road surface point is located. The glow transparency of the road surface point is then determined according to the glow transparency rule of that interval.
Illustratively, assume that at a certain rendering moment the glow radius is 0.3, the transverse expansion ratio is 0.2, and the glow preset transparency is 0.5. Then, according to the glow radius and the transverse expansion ratio, the expanded transition radius can be determined to be 0.5, and the virtual navigation road can be divided laterally into five lateral texture sections: (-1, -0.5), (-0.5, -0.3), (-0.3, 0.3), (0.3, 0.5) and (0.5, 1). Here, (-0.3, 0.3) belongs to the first lateral texture interval in which the glow is fixed, corresponding to the glow radius, and the matched glow transparency is the glow preset transparency 0.5. (-0.5, -0.3) and (0.3, 0.5) belong to the second lateral texture interval in which the expanded glow fades, and the matched glow transparency gradually changes from the glow preset transparency 0.5 down to 0 toward both sides. (-1, -0.5) and (0.5, 1) belong to the third lateral texture interval, and the matched glow transparency is 0.
Therefore, according to the lateral coordinate value in the texture coordinates of each road surface point, it is determined in which of the three types of lateral texture interval the road surface point lies, and the glow transparency of the road surface point is determined according to the glow transparency rule matched to that interval.
After the glow transparency of each road surface point is determined, the glow transparency of the road surface point can be added to the texture transparency already determined for that point, so as to combine the two texture effects and obtain the updated texture transparency of the road surface point.
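The interval logic described above can be sketched as follows in Python. The function and parameter names are illustrative (the patent does not specify an implementation), and the linear fade across the transition band is an assumption consistent with the "gradually changes from the glow preset transparency to 0" description; lateral texture coordinates are assumed to lie in [-1, 1].

```python
def glow_alpha(u, glow_radius=0.3, expand_ratio=0.2, glow_max=0.5):
    """Glow transparency for a lateral texture coordinate u in [-1, 1].

    Fixed at glow_max inside the glow radius (first interval), fades
    linearly to 0 across the expanded transition band (second interval),
    and is 0 beyond it (third interval).
    """
    transition = glow_radius + expand_ratio   # expanded transition radius, e.g. 0.3 + 0.2 = 0.5
    d = abs(u)
    if d <= glow_radius:                      # first lateral texture interval
        return glow_max
    if d <= transition:                       # second interval: linear fade-out
        return glow_max * (transition - d) / (transition - glow_radius)
    return 0.0                                # third interval


def apply_glow(base_alpha, u, **glow_params):
    """Updated texture transparency: base transparency plus glow transparency."""
    return base_alpha + glow_alpha(u, **glow_params)
```

With the example values from above, a point at lateral coordinate 0.4 falls in the second interval and receives a glow transparency halfway between 0.5 and 0.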
2) If the special effect type of the road auxiliary special effect is the streamer special effect, determining the streamer transparency conversion ratio of each road surface point according to the predefined streamer radius and streamer longitudinal position ratio; and updating the texture transparency of each road surface point according to the streamer transparency conversion ratio of the road surface point.
The streamer special effect in the rendered virtual navigation road highlights the dynamic movement of a rendered lateral streamer line along the longitudinal direction of the road. For the streamer special effect, a lateral streamer line can be used for representation. Furthermore, a streamer radius may be predefined in the longitudinal direction to represent the longitudinal width of the lateral streamer line. In addition, a streamer longitudinal position ratio is set for the lateral streamer line in the longitudinal direction; the streamer longitudinal position ratio represents the longitudinal relative position, at the current rendering moment, of the streamer effect represented by the lateral streamer line.
Then, the additional streamer transparency is at its maximum at the longitudinal relative position indicated by the streamer longitudinal position ratio, and gradually decreases from that position toward both sides within the streamer radius.
Next, according to the longitudinal coordinate value in the texture coordinates of each road surface point, the distance between the road surface point and the longitudinal relative position of the streamer effect is determined. The streamer transparency conversion ratio of the road surface point is then determined according to the variation of the streamer transparency at that distance. Finally, the product of the streamer transparency conversion ratio of each road surface point and the texture transparency already determined for that point is calculated, so as to update the texture transparency of the road surface point and obtain its updated value.
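The longitudinal update described above can be sketched as follows. The peak ratio (`boost`) and the baseline ratio of 1.0 outside the streamer radius are assumptions made for the sketch; the text only specifies that the streamer transparency peaks at the indicated longitudinal position, decreases toward both sides within the streamer radius, and is applied multiplicatively.

```python
def streamer_ratio(v, pos_ratio, radius=0.1, boost=2.0):
    """Streamer transparency conversion ratio for a longitudinal texture
    coordinate v.  Peaks at the longitudinal relative position indicated
    by pos_ratio, and falls back linearly to 1.0 (no change) at a
    distance of `radius` from it.
    """
    d = abs(v - pos_ratio)
    if d >= radius:
        return 1.0                                   # outside the streamer line: unchanged
    return 1.0 + (boost - 1.0) * (radius - d) / radius


def apply_streamer(base_alpha, v, pos_ratio, **params):
    """Updated texture transparency: product of the conversion ratio and
    the texture transparency already determined for the road surface point."""
    return base_alpha * streamer_ratio(v, pos_ratio, **params)
```

Animating the streamer then reduces to advancing `pos_ratio` at each rendering moment, which matches the later observation that only the streamer longitudinal position ratio needs to be re-supplied per frame.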
It should be understood that, in order to avoid mutual interference between the glow special effect and the streamer special effect within the virtual navigation road, generally only one of the two is added within the virtual navigation road. Therefore, S340 and S350 in the present application are executed alternatively, not simultaneously.
S360, determining the texture rendering value of each road surface point according to the initial texture value specified by the texture coordinates of the road surface point and the texture transparency of the road surface point.
After the texture transparency of each road surface point is determined, in order to accurately set the texture information of the virtual navigation road, an initial texture value can be specified for the texture coordinates of each road surface point; for example, a single color value is uniformly set for every road surface point in the navigation route as its initial texture value. Then, the initial texture value of each road surface point is fused with the texture transparency of that point to obtain the texture rendering value of the road surface point.
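A minimal sketch of this fusion is shown below, combining the product rule described elsewhere in the application (lateral transparency × longitudinal transparency × predefined road transparency) with the uniform initial color. The RGBA tuple layout and the clamping to [0, 1] are assumptions for the sketch.

```python
def texture_render_value(color, lat_alpha, lon_alpha, road_alpha=1.0):
    """Texture rendering value for one road surface point.

    The texture transparency is the product of the lateral texture
    transparency, the longitudinal texture transparency and the
    predefined road transparency; it is then fused with the uniform
    initial color into an RGBA value.
    """
    alpha = lat_alpha * lon_alpha * road_alpha
    alpha = max(0.0, min(1.0, alpha))   # clamp to the valid transparency range (assumption)
    r, g, b = color
    return (r, g, b, alpha)
```

In an actual renderer this fusion would typically happen in the fragment shader, with the color passed as a uniform and the transparencies computed from the interpolated texture coordinates.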
It should be noted that, at the first rendering moment at which the virtual navigation road needs to be rendered, the following are set: the road auxiliary points corresponding to each road sign point in the navigation route and the texture coordinates of each road auxiliary point; a uniform color for the road sign points; the predefined lateral transparency transformation relation and longitudinal transparency transformation relation; and, when a glow special effect or a streamer special effect exists, additionally the glow radius, transverse expansion ratio and glow preset transparency, or the streamer radius and streamer longitudinal position ratio. The above information is input to the OpenGL interface, so that the virtual navigation road can be rendered for the first time.
Then, at each subsequent rendering moment, virtual navigation roads under various road effects can be rendered continuously simply by inputting the predefined lateral transparency transformation relation and longitudinal transparency transformation relation, together with the transverse expansion ratio of the glow special effect or the streamer longitudinal position ratio, to the OpenGL interface.
S370, rendering the corresponding virtual navigation road according to the texture rendering value of each road surface point.
According to the technical scheme provided by the embodiment of the application, for each road sign point in the navigation route, the road auxiliary point corresponding to the road sign point and the texture coordinate of the road auxiliary point are obtained. And then, according to the texture coordinates of the road auxiliary points and a predefined road surface transparency transformation relation, the texture rendering value of each road surface point can be determined, so that the corresponding virtual navigation road is rendered, one-time rendering of multiple effects in the virtual navigation road is realized, multiple road effects do not need to be superposed through multiple times of rendering operations, the complicated steps of road rendering are simplified, and the high efficiency and the individuation of road rendering are improved.
Fig. 4 is a schematic block diagram of a road rendering apparatus according to an embodiment of the present application. As shown in fig. 4, the apparatus 400 may include:
a road point obtaining module 410, configured to obtain the road auxiliary point corresponding to each road sign point in the navigation route and the texture coordinates of the road auxiliary point;
a texture determining module 420, configured to determine a texture rendering value of each road point according to the texture coordinate of the road auxiliary point and a predefined road transparency transformation relation;
and a road rendering module 430, configured to render the corresponding virtual navigation road according to the texture rendering value of each road point.
Further, the texture determining module 420 may include:
the interpolation unit is used for interpolating according to the texture coordinates of the road auxiliary points to obtain corresponding road surface points and determining the texture coordinates of each road surface point;
the texture transparency determining unit is used for determining the texture transparency of each road surface point according to the texture coordinate of each road surface point based on the predefined road surface transparency transformation relation;
and the texture rendering value determining unit is used for determining the texture rendering value of the road surface point according to the initial texture value specified by the texture coordinate of each road surface point and the texture transparency of the road surface point.
Further, the texture transparency determining unit may be specifically configured to:
respectively determining the transverse texture transparency and the longitudinal texture transparency of each road surface point according to the texture coordinates of each road surface point based on the predefined transverse transparency transformation relation and longitudinal transparency transformation relation;
and calculating the product of the transparency of the transverse texture, the transparency of the longitudinal texture and the predefined transparency of the road of each road point as the transparency of the texture of the road point.
Further, the road rendering apparatus 400 may further include a texture transparency update module. The texture transparency update module may be configured to:
determining the special effect type of the road auxiliary special effect under the navigation route;
and updating the texture transparency of each road point according to the special effect rendering parameters corresponding to the special effect types.
Further, the texture transparency updating module may be specifically configured to:
if the special effect type is a glow special effect, determining the glow transparency of each road point according to a predefined glow radius, a transverse expansion ratio and a glow preset transparency;
and updating the texture transparency of each road point according to the glow transparency of each road point.
Further, the texture transparency updating module may be further specifically configured to:
if the special effect type is a streamer special effect, determining the streamer transparency conversion ratio of each road point according to the predefined streamer radius and streamer longitudinal position ratio;
and updating the texture transparency of each road point according to the streamer transparency conversion ratio of the road point.
Further, the road point obtaining module 410 may be specifically configured to:
acquiring at least one first road auxiliary point positioned on the left side of each road marking point and at least one second road auxiliary point positioned on the right side of the road marking point;
wherein the first road auxiliary points and the second road auxiliary points are in one-to-one correspondence.
In the embodiment of the application, for each landmark point in the navigation route, the road auxiliary point corresponding to the landmark point and the texture coordinate of the road auxiliary point are obtained. And then, according to the texture coordinates of the road auxiliary points and a predefined road surface transparency transformation relation, the texture rendering value of each road surface point can be determined, so that the corresponding virtual navigation road is rendered, one-time rendering of multiple effects in the virtual navigation road is realized, multiple road effects do not need to be superposed through multiple times of rendering operations, the complicated steps of road rendering are simplified, and the high efficiency and the individuation of road rendering are improved.
It is to be understood that apparatus embodiments and method embodiments may correspond to one another and that similar descriptions may refer to method embodiments. To avoid repetition, further description is omitted here. Specifically, the apparatus 400 shown in fig. 4 may perform any method embodiment in the present application, and the foregoing and other operations and/or functions of each module in the apparatus 400 are respectively for implementing corresponding processes in each method in the embodiment of the present application, and are not described herein again for brevity.
The apparatus 400 of the embodiments of the present application is described above in connection with the figures from the perspective of functional modules. It should be understood that the functional modules may be implemented by hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, the steps of the method embodiments in the present application may be implemented by integrated logic circuits of hardware in a processor and/or instructions in the form of software, and the steps of the method disclosed in conjunction with the embodiments in the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. Alternatively, the software modules may be located in random access memory, flash memory, read only memory, programmable read only memory, electrically erasable programmable memory, registers, and the like, as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps in the above method embodiments in combination with hardware thereof.
Fig. 5 is a schematic block diagram of an electronic device shown in an embodiment of the present application.
As shown in fig. 5, the electronic device 500 may include:
a memory 510 and a processor 520, the memory 510 being configured to store a computer program and to transfer the program code to the processor 520. In other words, the processor 520 may call and run a computer program from the memory 510 to implement the method in the embodiment of the present application.
For example, the processor 520 may be configured to perform the above-described method embodiments according to instructions in the computer program.
In some embodiments of the present application, the processor 520 may include, but is not limited to:
general purpose processors, digital Signal Processors (DSPs), application Specific Integrated Circuits (ASICs), field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like.
In some embodiments of the present application, the memory 510 includes, but is not limited to:
volatile memory and/or non-volatile memory. The non-volatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. Volatile Memory can be Random Access Memory (RAM), which acts as external cache Memory. By way of example, but not limitation, many forms of RAM are available, such as Static random access memory (Static RAM, SRAM), dynamic Random Access Memory (DRAM), synchronous Dynamic random access memory (Synchronous DRAM, SDRAM), double Data Rate Synchronous Dynamic random access memory (DDR SDRAM), enhanced Synchronous SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
In some embodiments of the present application, the computer program may be partitioned into one or more modules, which are stored in the memory 510 and executed by the processor 520 to perform the methods provided herein. The one or more modules may be a series of computer program instruction segments capable of performing certain functions, the instruction segments describing the execution of the computer program in the electronic device.
As shown in fig. 5, the electronic device may further include:
a transceiver 530, the transceiver 530 being connectable to the processor 520 or the memory 510.
The processor 520 may control the transceiver 530 to communicate with other devices, and in particular, may transmit information or data to the other devices or receive information or data transmitted by the other devices. The transceiver 530 may include a transmitter and a receiver. The transceiver 530 may further include one or more antennas.
It should be understood that the various components in the electronic device are connected by a bus system that includes a power bus, a control bus, and a status signal bus in addition to a data bus.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. In other words, the present application also provides a computer program product containing instructions, which when executed by a computer, cause the computer to execute the method of the above method embodiments.
When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the module is merely a logical division, and other divisions may be realized in practice, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts shown as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. For example, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A method of road rendering, comprising:
acquiring a road auxiliary point corresponding to each road sign point in a navigation route and texture coordinates of the road auxiliary point;
determining a texture rendering value of each road surface point according to the texture coordinates of the road auxiliary points and a predefined road surface transparency transformation relation;
and rendering the corresponding virtual navigation road according to the texture rendering value of each road point.
2. The method of claim 1, wherein determining the texture rendering value of each road surface point according to the texture coordinates of the road auxiliary points and a predefined road surface transparency transformation relation comprises:
interpolating to obtain corresponding road surface points according to the texture coordinates of the road auxiliary points, and determining the texture coordinates of each road surface point;
determining the texture transparency of each road surface point according to the texture coordinate of each road surface point based on a predefined road surface transparency transformation relation;
and determining the texture rendering value of each road surface point according to the initial texture value specified by the texture coordinate of each road surface point and the texture transparency of the road surface point.
3. The method of claim 2, wherein determining the texture transparency of each road surface point from its texture coordinates based on a predefined road surface transparency transformation relationship comprises:
respectively determining the transverse texture transparency and the longitudinal texture transparency of each road surface point according to the texture coordinates of each road surface point based on the predefined transverse transparency transformation relation and longitudinal transparency transformation relation;
and calculating the product of the transverse texture transparency, the longitudinal texture transparency and the predefined road transparency of each road point as the texture transparency of the road point.
4. The method of claim 2, further comprising, after determining the texture transparency of each road point from its texture coordinates based on a predefined road transparency transformation relationship:
determining the special effect type of the road auxiliary special effect under the navigation route;
and updating the texture transparency of each road point according to the special effect rendering parameters corresponding to the special effect types.
5. The method according to claim 4, wherein the updating the texture transparency of each road point according to the special effect rendering parameter corresponding to the special effect type comprises:
if the special effect type is a glow special effect, determining the glow transparency of each road point according to a predefined glow radius, a transverse expansion ratio and a glow preset transparency;
and updating the texture transparency of each road point according to the glow transparency of the road point.
6. The method according to claim 4, wherein the updating the texture transparency of each road point according to the special effect rendering parameter corresponding to the special effect type comprises:
if the special effect type is a streamer special effect, determining the streamer transparency conversion ratio of each road point according to the predefined streamer radius and streamer longitudinal position ratio;
and updating the texture transparency of each road point according to the streamer transparency conversion ratio of the road point.
7. The method of any one of claims 1 to 6, wherein the obtaining of the road assistance point corresponding to each waypoint in the navigation route comprises:
acquiring at least one first road auxiliary point positioned on the left side of each road marking point and at least one second road auxiliary point positioned on the right side of the road marking point;
wherein the first road auxiliary points and the second road auxiliary points are in one-to-one correspondence.
8. A road rendering apparatus, comprising:
the road point acquisition module is used for acquiring a road auxiliary point corresponding to each road sign point in a navigation route and texture coordinates of the road auxiliary point;
the texture determining module is used for determining a texture rendering value of each road point according to the texture coordinate of the road auxiliary point and a predefined road surface transparency transformation relation;
and the road rendering module is used for rendering the corresponding virtual navigation road according to the texture rendering value of each road point.
9. An electronic device, comprising:
a processor and a memory for storing a computer program, the processor being configured to invoke and execute the computer program stored in the memory to perform the road rendering method of any of claims 1-7.
10. A computer-readable storage medium for storing a computer program that causes a computer to execute the road rendering method according to any one of claims 1 to 7.
11. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the road rendering method according to any of claims 1-7.
CN202211552295.4A 2022-12-05 2022-12-05 Road rendering method, device, equipment and storage medium Pending CN115790638A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211552295.4A CN115790638A (en) 2022-12-05 2022-12-05 Road rendering method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211552295.4A CN115790638A (en) 2022-12-05 2022-12-05 Road rendering method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115790638A true CN115790638A (en) 2023-03-14

Family

ID=85445826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211552295.4A Pending CN115790638A (en) 2022-12-05 2022-12-05 Road rendering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115790638A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117853677A (en) * 2024-02-29 2024-04-09 腾讯科技(深圳)有限公司 Road edge drawing method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN111242881B (en) Method, device, storage medium and electronic equipment for displaying special effects
CN110062176B (en) Method and device for generating video, electronic equipment and computer readable storage medium
CN108280886A (en) Laser point cloud mask method, device and readable storage medium storing program for executing
JP6768123B2 (en) Augmented reality methods and equipment
WO2014186509A1 (en) Use of map data difference tiles to iteratively provide map data to a client device
US20230005194A1 (en) Image processing method and apparatus, readable medium and electronic device
CN104268911A (en) Method and device for drawing route in map
CN105333883A (en) Navigation path and trajectory displaying method and apparatus for head-up display (HUD)
CN107941226A (en) Method and apparatus for the direction guide line for generating vehicle
CN110174114B (en) Lane line-level path generation method and device and storage medium
CN109544658B (en) Map rendering method and device, storage medium and electronic device
CN105444754A (en) Navigation image display method and device
CN107830869A (en) Information output method and device for vehicle
JP2014521148A (en) Rendering a text image that follows a line
CN115790638A (en) Road rendering method, device, equipment and storage medium
CN104519339A (en) Image processing apparatus and method
CN108805849B (en) Image fusion method, device, medium and electronic equipment
CN109724617B (en) Navigation route drawing method and related equipment
WO2011123710A1 (en) Synthesizing panoramic three-dimensional images
GB2591354A (en) Video frame processing method and apparatus
CN111402364A (en) Image generation method and device, terminal equipment and storage medium
CN116310036A (en) Scene rendering method, device, equipment, computer readable storage medium and product
CN109598672B (en) Map road rendering method and device
US9846819B2 (en) Map image display device, navigation device, and map image display method
CN112686806B (en) Image splicing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination