CN116012508B - Lane line rendering method, device and storage medium

Publication number: CN116012508B (granted; published as application CN116012508A)
Application number: CN202310308201.7A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 曹航
Assignee: Autonavi Software Co Ltd
Legal status: Active
Prior art keywords: virtual camera, lane line, determining, coordinates

Abstract

The embodiment of the application provides a lane line rendering method, a lane line rendering device and a storage medium. The lane line rendering method includes the following steps: determining the shooting direction of a virtual camera according to a target lane line to be displayed; determining, in the virtual plane in which the target lane line is located, a bounding box containing the target lane line; determining the coordinates of the viewpoint of the virtual camera in the virtual plane according to the bounding box, and determining the distance between the virtual camera and the viewpoint; setting rendering parameters of the virtual camera according to the shooting direction of the virtual camera, the coordinates of the viewpoint and the distance between the virtual camera and the viewpoint, so that the bounding box is located within the shooting area of the virtual camera; and rendering the target lane line on the screen according to the rendering parameters of the virtual camera and the target lane line. With this technical scheme, the rendering parameters of the virtual camera can be adjusted automatically so that the complete target lane line is rendered on the screen, avoiding tedious adjustment operations.

Description

Lane line rendering method, device and storage medium
Technical Field
The embodiment of the application relates to the technical field of high-precision maps, in particular to a lane line rendering method, a lane line rendering device and a storage medium.
Background
In the production process of high-precision maps, there may be lane lines that need to be manually analyzed, and for this portion of lane lines, it is often required to render them on a screen by a virtual camera for the operator to observe.
Currently, in the related art, when a lane line is rendered by a virtual camera, an operator typically selects a target position on the lane line to be displayed, and the viewpoint of the virtual camera is then moved to the target position so that the shooting area of the virtual camera covers the lane line to be displayed, where the viewpoint refers to the center position of the shooting area of the virtual camera. However, the present inventors found that moving the viewpoint to the target position merely turns the shooting direction of the virtual camera, based on the camera's original position and original zoom level, to change the shooting range. Moving the viewpoint of the virtual camera to the target position therefore only ensures that the target position falls within the shooting range of the virtual camera; it does not ensure that the lane line to be displayed is completely shown on the screen. When the lane line to be displayed is not completely shown, the operator must additionally drag the map manually and zoom the map scale to display the lane line completely, which increases the operation cost of producing the high-precision map and reduces operation efficiency.
Disclosure of Invention
The embodiment of the application provides a lane line rendering method, a lane line rendering device and a computer storage medium, so as to solve the problem of complicated lane line rendering operation through a virtual camera.
In a first aspect, an embodiment of the present application provides a method for rendering a lane line, including:
determining a shooting direction of a virtual camera according to a target lane line to be displayed;
determining a bounding box containing the target lane line in a virtual plane where the target lane line is located;
determining coordinates of a point of view of the virtual camera in the virtual plane according to the bounding box, and determining a distance between the virtual camera and the point of view;
setting rendering parameters of the virtual camera according to the shooting direction of the virtual camera, the coordinates of the viewpoint and the distance between the virtual camera and the viewpoint, so that the bounding box is positioned in the shooting area of the virtual camera;
and rendering the target lane line in a screen according to the rendering parameters of the virtual camera and the target lane line.
In a second aspect, an embodiment of the present application provides a lane line rendering device, including:
the determining module is used for determining the shooting direction of the virtual camera according to the target lane line to be displayed;
The determining module is further used for determining a bounding box containing the target lane line in a virtual plane where the target lane line is located;
the determining module is further used for determining coordinates of the point of view of the virtual camera in the virtual plane according to the bounding box and determining the distance between the virtual camera and the point of view;
the processing module is used for setting rendering parameters of the virtual camera according to the shooting direction of the virtual camera, the coordinates of the point of view and the distance between the virtual camera and the point of view so that the bounding box is positioned in the shooting area of the virtual camera;
and the rendering module is used for rendering the target lane line in a screen according to the rendering parameters of the virtual camera and the target lane line.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing a program;
a processor for executing the program stored by the memory, the processor being adapted to perform the method of the first aspect and any of the various possible designs of the first aspect as described above when the program is executed.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect above and any of the various possible designs of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements a method as described in the first aspect and any of the various possible designs of the first aspect.
The embodiment of the application provides a lane line rendering method, a lane line rendering device, a storage medium and a program product. The lane line rendering method includes the following steps: determining the shooting direction of a virtual camera according to a target lane line to be displayed; determining, in the virtual plane in which the target lane line is located, a bounding box containing the target lane line; determining the coordinates of the viewpoint of the virtual camera in the virtual plane according to the bounding box, and determining the distance between the virtual camera and the viewpoint; setting rendering parameters of the virtual camera according to the shooting direction of the virtual camera, the coordinates of the viewpoint and the distance between the virtual camera and the viewpoint, so that the bounding box is located within the shooting area of the virtual camera; and rendering the target lane line on the screen according to the rendering parameters of the virtual camera and the target lane line. Because the rendering parameters are set so that the bounding box lies within the shooting area of the virtual camera, the complete target lane line can be rendered on the screen according to the rendering parameters of the virtual camera and the target lane line, thereby automatically adjusting the virtual camera to render and display the complete target lane line and avoiding tedious adjustment operations.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, a brief description will be given below of the drawings that are needed in the embodiments or the prior art descriptions, it being obvious that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
Fig. 1 is a schematic implementation diagram of a label skipping job provided in an embodiment of the present application;
fig. 2 is a schematic parameter diagram of a virtual camera according to an embodiment of the present application;
fig. 3 is a schematic view of a shooting area of a virtual camera according to an embodiment of the present application;
fig. 4 is a flowchart of a method for rendering lane lines according to an embodiment of the present application;
fig. 5 is a second flowchart of a lane line rendering method provided in an embodiment of the present application;
fig. 6 is a schematic implementation diagram of determining a horizontal shooting direction according to an embodiment of the present application;
fig. 7 is a schematic diagram of a direction of displaying a target lane line according to an embodiment of the present disclosure;
fig. 8 is a flowchart III of a lane line rendering method provided in an embodiment of the present application;
fig. 9 is a schematic implementation diagram of a determination bounding box provided in an embodiment of the present application;
Fig. 10 is a schematic diagram of a first implementation of determining a distance between a virtual camera and a viewpoint according to an embodiment of the present application;
fig. 11 is a second implementation schematic diagram for determining a distance between a virtual camera and a viewpoint according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a lane line rendering device according to an embodiment of the present disclosure;
fig. 13 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
For better understanding of the technical solutions of the present application, the related art related to the present application is described in further detail below.
In the image rendering process, the user's view is realized by a virtual camera that performs image rendering, where the function of the virtual camera is to cause the picture formed by image rendering to be displayed on the screen of the user device. A brief description of the virtual camera is presented here. It will be appreciated that taking a photograph with a camera in the real world is a physical optical process: the photographed object may be any object in the real world, and the photograph is formed mainly by light passing through a lens and reaching a sensor, where it is recorded. The virtual camera is a camera simulating the one just described: the object it photographs is an existing three-dimensional scene representation, and the mechanism simulating photo generation is a designed algorithm.
That is, the virtual camera serves as the origin of the user's vision in the virtual scene, where the virtual scene may be understood as a constructed virtual environment; in this application, the virtual scene may include the map content of a high-precision map. Specifically, the virtual camera is a camera constructed in a virtual scene, having corresponding shooting parameters (e.g., field angle, focal length, etc.) that form a corresponding shooting area. As the position of the virtual camera changes, the shooting area changes so that different contents are rendered and displayed on the screen, where the contents located within the shooting area of the virtual camera are exactly the contents that need to be rendered and displayed on the screen.
In the current production process of high-precision maps, there may be some lane lines needing manual analysis or manual further confirmation, and for the lane lines, it is generally required to render the lane lines on a screen by means of a virtual camera so as to facilitate the operator to observe the data.
Currently, when a lane line is rendered by a virtual camera in the related art, a calibration position is usually selected on the lane line to be displayed. The calibration position is often a single point on the lane line, for example a point at the starting position of the lane line, a point at the ending position, or any other point on the lane line, and it indicates that the currently calibrated lane line is the lane line to be further analyzed. This position, referred to below as the target position, may be selected by an operator or automatically by an algorithm.
After that, a jump operation may be performed, that is, the viewpoint of the virtual camera (the center position of the shooting area of the virtual camera) is moved to the target position determined above, so that the lane line falls within the shooting area of the virtual camera. However, moving the viewpoint to the target position merely turns the shooting direction of the virtual camera, based on the camera's original position and original zoom level, to change the shooting range. Therefore, moving the viewpoint of the virtual camera to the target position can only ensure that the target position is included within the shooting range of the virtual camera; it cannot ensure that the lane line to be displayed is completely shown on the screen. When the lane line to be displayed is not completely shown, the high-precision map must be further dragged and the map scale zoomed so that the lane line is completely rendered and displayed on the screen.
Meanwhile, since the photographing direction of the virtual camera is not determined, the effect of the lane lines displayed in the screen cannot be ensured.
For example, the foregoing description may be further understood with reference to fig. 1. Fig. 1 is a schematic implementation diagram of a jump operation provided in an embodiment of the present application.
As shown in fig. 1, assume that there is currently a virtual camera shown at 101 in fig. 1. It is understood that the shooting range of a virtual camera generally takes the form of a view cone, where the intersection of the view cone with different planes forms different shooting areas. For example, referring to fig. 1, the rectangular area indicated at 102 is a shooting area of the virtual camera, and the rectangular area indicated at 103 is also a shooting area of the virtual camera; these two shooting areas result from the intersection of the view cone of the virtual camera 101 with different planes. The shooting area 103 in fig. 1 is assumed to be the shooting area corresponding to the virtual camera 101 on the plane in which the high-precision map lies.

With further reference to fig. 1, the area indicated by 102 in fig. 1 is a partial area of the high-precision map, and there is a lane line 105 to be displayed in the area 102. Assume that the target position indicated by 104 in fig. 1 has currently been determined for the lane line 105, to indicate that the lane line 105 is the lane line to be displayed.

The viewpoint of the virtual camera 101 can then be moved to the target position 104; as can be seen from fig. 1, the viewpoint of the virtual camera 101 is the center position of the shooting area 103 of the virtual camera 101. After the viewpoint of the virtual camera 101 is moved to the target position 104, the shooting area formed by the virtual camera 101 on the plane of the high-precision map contains the target position 104.
As can be determined with reference to fig. 1, after the viewpoint of the virtual camera is moved directly to the target position, the lane lines included in the photographing area of the virtual camera may be incomplete, and the effect of rendering and displaying in the screen may be understood with reference to 107 in the drawing, and it can be determined that the geometry of the displayed lane lines is incomplete.
Therefore, after the jump operation is performed, the high-precision map must still be manually dragged and the map scale zoomed before the lane line to be rendered can be completely displayed on the screen; that is, rendering and displaying a lane line through a virtual camera currently involves cumbersome operations.
Aiming at the problems in the prior art, the application provides the following technical conception: according to the position and trend of the lane lines to be displayed, the position and shooting direction of the virtual camera are automatically adjusted, so that the shooting area corresponding to the adjusted virtual camera can contain complete lane lines, and therefore operation steps of rendering and displaying the lane lines by the virtual camera can be effectively saved.
Based on the above description, the lane line rendering method provided in the present application is described in detail below with reference to specific embodiments. It should be noted that, in this application, the execution subject of each embodiment may be a device with data processing functionality, such as a server, a processor or a chip; the specific implementation of the execution subject is not limited in this embodiment and may be selected and set according to actual needs.
Before describing a specific embodiment, several parameters for determining the position and shooting direction of a virtual camera in the map scene of the present application will be described with reference to fig. 2 and 3. Fig. 2 is a schematic parameter diagram of a virtual camera provided in an embodiment of the present application, and fig. 3 is a schematic shooting area diagram of the virtual camera provided in an embodiment of the present application.
The shooting parameters of the virtual camera may include coordinates of a point of view of the virtual camera, a distance between the virtual camera and the point of view, a horizontal shooting direction of the virtual camera, and a pitching shooting direction of the virtual camera.
As shown in fig. 2, it is assumed that there is currently a virtual space corresponding to a preset coordinate system composed of an N axis, an E axis and an H axis perpendicular to each other, which is illustrated in fig. 2, wherein a virtual plane in which a target lane line is located is a plane composed of the N axis and the E axis in the preset coordinate system.
The N-E-H coordinate system may be briefly described here. When map data is processed, the earth may, for example, be projected onto a plane by a map projection; the X axis of the world coordinate system corresponding to the earth then points east, corresponding to the E axis of the present coordinate system, and the Y axis points north, corresponding to the N axis, while the H axis, perpendicular to the projection plane, represents altitude. Map data processing can then be performed based on the N-E-H coordinate system.
The coordinates of the viewpoint of the virtual camera are the coordinates of the viewpoint 201 in the virtual plane illustrated in fig. 2, and the distance between the virtual camera and the viewpoint is the distance illustrated by 202 in fig. 2.
The horizontal shooting direction of the virtual camera is the direction indicated by 203 in fig. 2. In one possible implementation, for example, the clockwise angle between due north (i.e., the N-axis direction) and the projection, onto the virtual plane, of the line connecting the virtual camera and the viewpoint may be used to represent the horizontal shooting direction; that is, the angle β illustrated in fig. 2, which may be defined as the heading angle. Alternatively, another included angle may be used to represent the horizontal shooting direction, as long as that angle can represent the direction indicated by 203 in fig. 2.

The pitch shooting direction of the virtual camera is the direction indicated by 204 in fig. 2. In one possible implementation, for example, the angle between the line connecting the virtual camera and the viewpoint and the H axis may be used to represent the pitch shooting direction; that is, the angle λ illustrated in fig. 2, which may be defined as the tilt angle. Alternatively, another angle may be used to indicate the pitch shooting direction, as long as that angle indicates the direction indicated by 204 in fig. 2.
Based on the parameters of the virtual camera determined in fig. 2, the rendering parameters of the virtual camera can be determined and set accordingly, after which the shooting area of the virtual camera in the virtual plane can be determined. For example, referring to fig. 3, after the rendering parameters of the virtual camera are set according to the parameters illustrated in fig. 2, the shooting area of the virtual camera in the virtual plane is the area illustrated by 301 in fig. 3.
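To make the geometry of these four parameters concrete, the following is a minimal sketch, not taken from the patent itself, of how the camera position in the N-E-H coordinate system could be derived from the viewpoint coordinates, the camera-to-viewpoint distance, the heading angle β and the tilt angle λ; all function and variable names are illustrative assumptions.

```python
import math

def camera_position(viewpoint_e: float, viewpoint_n: float,
                    distance: float, heading_deg: float, tilt_deg: float):
    """Place the camera relative to its viewpoint in the N-E-H frame.

    heading_deg: clockwise angle from due north (N axis) of the projection of
                 the camera-viewpoint line onto the virtual plane (beta in fig. 2).
    tilt_deg:    angle between the camera-viewpoint line and the H axis
                 (lambda in fig. 2); 0 means the camera looks straight down.
    """
    heading = math.radians(heading_deg)
    tilt = math.radians(tilt_deg)
    horizontal = distance * math.sin(tilt)   # in-plane offset from the viewpoint
    cam_e = viewpoint_e - horizontal * math.sin(heading)
    cam_n = viewpoint_n - horizontal * math.cos(heading)
    cam_h = distance * math.cos(tilt)        # height above the virtual plane
    return cam_e, cam_n, cam_h

# With tilt = 0 the camera sits directly above the viewpoint:
print(camera_position(100.0, 200.0, 50.0, heading_deg=30.0, tilt_deg=0.0))
# -> (100.0, 200.0, 50.0)
```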
On the basis of the above description, the method for rendering the lane lines provided in the present application is described below with reference to fig. 4, and fig. 4 is a flowchart of the method for rendering the lane lines provided in the embodiment of the present application.
As shown in fig. 4, the method includes:
s401, determining the shooting direction of the virtual camera according to the target lane line to be displayed.
In this embodiment, the target lane line needs to be rendered and displayed, so the target lane line to be displayed may be determined first. In one possible implementation, for example, the relevant operation team may pre-mark the lane line to be displayed, and the marked lane line may then be understood as the target lane line.
In this embodiment, the shooting direction of the virtual camera may be determined according to the target lane line to be displayed, and it may be understood that, assuming that the virtual camera is fixed at a position, the virtual camera may also adjust the shooting direction at the fixed position, which corresponds to the shooting direction described in this embodiment. The shooting directions may include, for example, the horizontal shooting direction and the pitch shooting direction described above.
For example, the shooting direction of the virtual camera can be determined according to the lane line trend of the target lane line to be displayed, so that the shooting direction of the virtual camera is consistent with the lane line trend, the lane line displayed in the screen accords with the lane line trend, namely accords with the lane line understood in a common sense, and the observation of operators is facilitated.
S402, determining a bounding box containing the target lane line in a virtual plane where the target lane line is located.
It will be appreciated that in this embodiment, the high-precision map exists on a plane, and then the target lane line in the corresponding high-precision map exists on the plane, and in this embodiment, the plane on which the target lane line exists is understood as a virtual plane, and then the bounding box containing the target lane line may be determined in the virtual plane on which the target lane line exists. Wherein the bounding box may completely enclose the target lane line.
S403, determining coordinates of a point of view of the virtual camera in the virtual plane according to the bounding box, and determining a distance between the virtual camera and the point of view.
After determining the bounding box, the coordinates of the viewpoint of the virtual camera in the virtual plane may be determined according to the bounding box, and it is understood that the viewpoint is the center point of the shooting area of the virtual camera in the virtual plane. And in this embodiment, the distance between the virtual camera and the viewpoint may be determined according to the bounding box.
S404, setting rendering parameters of the virtual camera according to the shooting direction of the virtual camera, the coordinates of the point of view and the distance between the virtual camera and the point of view, so that the bounding box is located in the shooting area of the virtual camera.
For the virtual camera, the shooting direction of the virtual camera, the position of the point of view of the virtual camera, and the distance between the virtual camera and the point of view may be combined to determine the rendering parameters of the virtual camera, so in this embodiment, the rendering parameters of the virtual camera may be set according to the determined shooting direction of the virtual camera, the coordinates of the point of view of the virtual camera, and the distance between the virtual camera and the point of view. In one possible implementation, the rendering parameters may include, for example, a position of the virtual camera and a shooting direction of the virtual camera.
The set virtual camera forms a shooting area in the virtual plane, and in this embodiment the shooting area formed in the virtual plane by the set virtual camera contains the bounding box of the target lane line; that is, the bounding box is located within the shooting area of the virtual camera. Based on the above description, it can be determined that the content within the shooting area of the virtual camera is the content to be rendered on the screen, so when the bounding box is located within the shooting area of the virtual camera, the target lane line is guaranteed to be displayed completely on the screen.
S405, rendering the target lane lines in the screen according to the rendering parameters of the virtual camera and the target lane lines.
After the setting of the virtual camera is completed, in this embodiment, image rendering may be performed according to the rendering parameters of the virtual camera and the target lane line, so that the target lane line is rendered in the screen. It will be appreciated that the image rendering of the virtual camera is performed with respect to the content included in the shooting range of the virtual camera, so that a corresponding image is displayed in the screen. Based on the above description, it can be determined that, because the bounding box of the target lane line is included in the photographing region of the virtual camera, rendering of the complete target lane line in the screen can be achieved.
The lane line rendering method provided by the embodiment of the application includes the following steps: determining the shooting direction of a virtual camera according to a target lane line to be displayed; determining, in the virtual plane in which the target lane line is located, a bounding box containing the target lane line; determining the coordinates of the viewpoint of the virtual camera in the virtual plane according to the bounding box, and determining the distance between the virtual camera and the viewpoint; setting rendering parameters of the virtual camera according to the shooting direction of the virtual camera, the coordinates of the viewpoint and the distance between the virtual camera and the viewpoint, so that the bounding box is located within the shooting area of the virtual camera; and rendering the target lane line on the screen according to the rendering parameters of the virtual camera and the target lane line. Because the rendering parameters are set so that the bounding box lies within the shooting area of the virtual camera, the complete target lane line can be rendered on the screen according to the rendering parameters of the virtual camera and the target lane line, thereby automatically adjusting the virtual camera to render and display the complete target lane line and avoiding tedious adjustment operations.
Based on the above description, the implementation manner of determining the shooting direction of the virtual camera in the present application is described in further detail below. The shooting directions of the virtual camera may include, for example, the horizontal shooting direction and the pitch shooting direction described above.
In one possible implementation, for example, the horizontal shooting direction of the virtual camera may be determined according to the target lane line to be displayed, so that the horizontal shooting direction of the virtual camera coincides with the lane line start direction of the target lane line. Next, an implementation manner of determining a horizontal shooting direction will be described with reference to fig. 5 to 7, fig. 5 is a flowchart two of a method for rendering a lane line provided in an embodiment of the present application, fig. 6 is a schematic implementation diagram of determining a horizontal shooting direction provided in an embodiment of the present application, and fig. 7 is a schematic direction diagram of displaying a target lane line provided in an embodiment of the present application.
As shown in fig. 5, the method includes:
s501, acquiring coordinates of an ith lane line point in a virtual plane and coordinates of a jth lane line point in the virtual plane in a starting section of the target lane line.
In this embodiment, a starting segment of the target lane line may be determined, where the target lane line may include a plurality of lane line points, and the starting segment may be, for example, a segment formed by t lane line points that are located in front of the target lane line, where t is an integer greater than or equal to 1, and a specific value of t may be selected and set according to actual requirements. For example, the starting section of the lane line may be determined according to the traffic direction of the lane line, and the preceding t lane line points in the above-described target lane line, that is, the preceding t lane line points determined according to the traffic direction of the lane line.
The coordinates of the i-th lane line point in the start segment in the virtual plane and the coordinates of the j-th lane line point in the start segment in the virtual plane may be acquired in the start segment of the target lane line. Wherein i is less than j, and i and j are integers greater than or equal to 1.
For example, assuming that the first 10 lane line points in the target lane line form the start segment, the coordinates of the 1st lane line point and the coordinates of the 2nd lane line point in the start segment of the target lane line can be obtained; alternatively, the 1st and 3rd lane line points may be obtained, and so on. The specific selection of i and j is not limited in this embodiment, as long as the lane line points corresponding to i and j are located at the start portion of the target lane line.
S502, determining the inclination angle of the connecting line of the ith lane line point and the jth lane line point according to the coordinates of the ith lane line point in the virtual plane and the coordinates of the jth lane line point in the virtual plane.
Then, the inclination angle of the line connecting the ith lane line point and the jth lane line point can be determined according to the coordinates of the ith lane line point in the virtual plane and the coordinates of the jth lane line point in the virtual plane. Based on the above description, it may be determined that the virtual plane in which the target lane line in the present embodiment is located is a plane formed by the N axis and the E axis in the preset coordinate system, and the inclination angle in the present embodiment may be understood as an angle between the line of the i-th lane line point and the j-th lane line point and the E axis.
In one possible implementation, for example, the selected i-th lane line point may be expressed as $P_i$, and its corresponding coordinates may be denoted $(e_i, n_i)$, where $e_i$ is the coordinate value of the lane line point $P_i$ on the E axis and $n_i$ is the coordinate value of the lane line point $P_i$ on the N axis. The selected j-th lane line point may be expressed as $P_j$, and its corresponding coordinates may be denoted $(e_j, n_j)$, where $e_j$ is the coordinate value of the lane line point $P_j$ on the E axis and $n_j$ is the coordinate value of the lane line point $P_j$ on the N axis.

The inclination angle of the line between the i-th lane line point and the j-th lane line point can then be calculated by, for example, the following formula one:

$$\theta = \arctan\left(\frac{n_j - n_i}{e_j - e_i}\right)$$

The above is the formula for solving the inclination angle $\theta$ of the line connecting the two points.
For example, it can be understood with reference to fig. 6 that, in the illustration of fig. 6, the direction indicated by the y-axis is the direction of the inclination angle, and the angle denoted $\theta$ in fig. 6 is the inclination angle just introduced.
S503, determining a direction corresponding to the tilt angle as a horizontal shooting direction of the virtual camera.
After determining the inclination angle, in this embodiment the direction corresponding to the inclination angle may be determined as the horizontal shooting direction of the virtual camera. The direction corresponding to the inclination angle is the starting direction of the target lane line, so in this embodiment the horizontal shooting direction of the virtual camera is consistent with the lane line starting direction of the target lane line. This guarantees the effect with which the target lane line appears on the screen: the starting position of the target lane line is located at the bottom of the screen, and the target lane line is displayed on the screen from bottom to top along its direction.
For example, as can be understood with reference to fig. 7, when the horizontal photographing direction of the virtual camera coincides with the start direction of the target lane line, the effect illustrated by 701 in fig. 7 is exhibited, that is, the start position of the target lane line is located below the screen, and the target lane line is displayed in the screen from bottom to top along the direction of the target lane line.
In contrast, it can be understood in conjunction with 702 in fig. 7 that when the horizontal photographing direction of the virtual camera is not coincident with (say, perpendicular to) the start direction of the target lane line, the effect illustrated in 702 in fig. 7 is exhibited, that is, the start position of the target lane line is located at the side of the screen, and the target lane line is displayed in the screen from left to right along the direction of the target lane line.
As can be understood from comparing 701 and 702 in fig. 7, when the horizontal photographing direction of the virtual camera is consistent with the starting direction of the target lane line, the target lane line rendered in the screen is consistent with the direction understood in the normal sense, so by setting the horizontal photographing direction of the virtual camera to be consistent with the lane line starting direction of the target lane line, the direction represented in the screen by the lane line rendered based on the virtual camera completed by the setting can be effectively correct.
As can be determined from the above description, in this embodiment the first included angle between the N axis and the projection, onto the virtual plane, of the line connecting the virtual camera and the viewpoint may be used to indicate the horizontal shooting direction, while the inclination angle described above is measured against the E axis. In this embodiment the first included angle may therefore be solved, for example, by the following formula:

$$\beta = \theta + 90^\circ$$

That is, on the basis of the inclination angle, adding 90 degrees gives the first included angle $\beta$ between the direction corresponding to the inclination angle and the N axis. Corresponding to the example of fig. 6, the heading angle may be used to represent the horizontal shooting direction of the virtual camera.
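As an illustrative sketch of formula one and the 90-degree offset (the function name and the use of atan2 are assumptions, not the patent's exact computation):

```python
import math

def heading_from_start_segment(p_i, p_j):
    """Heading angle beta from two points (e, n) in the lane line's start
    segment: the inclination of the segment against the E axis, plus 90
    degrees so the angle is measured against the N axis instead.
    """
    e_i, n_i = p_i
    e_j, n_j = p_j
    # atan2 keeps the correct quadrant where a bare arctan would not.
    theta = math.degrees(math.atan2(n_j - n_i, e_j - e_i))
    return theta + 90.0

print(heading_from_start_segment((0.0, 0.0), (1.0, 1.0)))  # 135.0
```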
The determination manner of the horizontal shooting direction of the virtual camera is described above, and the implementation manner of determining the pitching shooting angle of the virtual camera is described below.
Based on the above description, it may be determined that the second included angle between the line connecting the virtual camera and the viewpoint and the H axis may be used to indicate the pitch shooting direction, and then for example, the second included angle tilt may be set to 0, so that the pitch shooting direction of the virtual camera faces the virtual plane where the target lane line is located and is perpendicular to the virtual plane where the target lane line is located.
It can be understood that by setting the pitching shooting direction of the virtual camera to face the virtual plane where the target lane line is located and to be perpendicular to the virtual plane where the target lane line is located, the virtual camera can shoot the target lane line in a vertical direction, so that the target lane line can be displayed in the screen to the greatest extent.
The above describes an implementation of determining the horizontal shooting direction and the pitch shooting direction of the virtual camera, which may determine the shooting direction of the virtual camera. The implementation of determining the viewpoint coordinates of the virtual camera and the distance between the virtual camera and the viewpoint is described in further detail below in conjunction with fig. 8-9. Fig. 8 is a flowchart III of a lane line rendering method provided in an embodiment of the present application, and fig. 9 is a schematic implementation diagram of a determination bounding box provided in an embodiment of the present application.
As shown in fig. 8, the method includes:
s801, determining the y-axis in the horizontal shooting direction in the virtual plane, and determining the x-axis in the direction perpendicular to the horizontal shooting direction in the virtual plane, results in a construction coordinate system located on the virtual plane.
After determining the horizontal shooting direction of the virtual camera described above, referring to fig. 9, it is possible in the present embodiment to determine the y-axis in the horizontal shooting direction in the virtual plane and to determine the x-axis in the direction perpendicular to the horizontal shooting direction in the virtual plane, resulting in a construction coordinate system located on the virtual plane, that is, the x-y coordinate system illustrated in fig. 9.
S802, determining coordinates of each lane line point in the target lane line in a construction coordinate system.
After the construction coordinate system is determined, the coordinates of the respective lane line points of the target lane line in the construction coordinate system may be determined in this embodiment. For example, the coordinates of the first lane line point $P_1$ in the construction coordinate system can be expressed as $(x_1, y_1)$, the coordinates of the second lane line point $P_2$ as $(x_2, y_2)$, and so on.
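As a concrete illustration (a sketch under assumed names and conventions, not the patent's code), expressing the (E, N) lane line points in the construction coordinate system amounts to projecting them onto the two axes, i.e., a plane rotation; here the y axis points along the heading direction (clockwise angle β from the N axis) and the x axis is one of the two perpendicular choices:

```python
import math

def to_construction_frame(points_en, heading_deg):
    """Express (E, N) points in the construction frame whose y axis points
    along the horizontal shooting direction (heading, clockwise from north).
    Assumes the construction frame shares its origin with the preset frame.
    """
    b = math.radians(heading_deg)
    y_axis = (math.sin(b), math.cos(b))    # unit vector of y in (E, N)
    x_axis = (math.cos(b), -math.sin(b))   # perpendicular unit vector for x
    return [
        (e * x_axis[0] + n * x_axis[1],    # x coordinate
         e * y_axis[0] + n * y_axis[1])    # y coordinate
        for e, n in points_en
    ]

# With heading 0 (due north) the frame coincides with (E, N):
print(to_construction_frame([(0.0, 5.0)], 0.0))  # [(0.0, 5.0)]
```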
S803, determining a minimum abscissa value, a maximum abscissa value, a minimum ordinate value and a maximum ordinate value in the coordinates corresponding to each lane line point.
And then, determining the minimum abscissa value, the maximum abscissa value, the minimum ordinate value and the maximum ordinate value in the coordinates corresponding to each lane line point.
The minimum abscissa value may be expressed as:

$$x_{min} = \min(x_1, x_2, \ldots, x_k)$$

where $k$ is the number of lane line points in the target lane line and $x_{min}$ is the minimum abscissa value.

The maximum abscissa value may be expressed as:

$$x_{max} = \max(x_1, x_2, \ldots, x_k)$$

where $x_{max}$ is the maximum abscissa value.

The minimum ordinate value may be expressed as:

$$y_{min} = \min(y_1, y_2, \ldots, y_k)$$

where $y_{min}$ is the minimum ordinate value.

And the maximum ordinate value may be expressed as:

$$y_{max} = \max(y_1, y_2, \ldots, y_k)$$

where $y_{max}$ is the maximum ordinate value.
S804, determining coordinates of all vertexes of the bounding box in a construction coordinate system according to the minimum abscissa value, the maximum abscissa value, the minimum ordinate value and the maximum ordinate value, and determining the bounding box according to the coordinates of all vertexes.
The coordinates of each vertex of the bounding box in the construction coordinate system can be determined from the minimum abscissa value, the maximum abscissa value, the minimum ordinate value and the maximum ordinate value. For example, the coordinates of the four vertices of the bounding box can be respectively expressed as $(x_{min}, y_{min})$, $(x_{min}, y_{max})$, $(x_{max}, y_{min})$ and $(x_{max}, y_{max})$. The bounding box is then determined according to the coordinates of each vertex in the construction coordinate system.
For example, as can be understood with reference to fig. 9, the bounding box shown at 901 in fig. 9 may be determined in the construction coordinate system, and the target lane line may be included in the bounding box.
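A minimal sketch of S803-S804, continuing the assumptions above (points already expressed in the construction frame):

```python
def bounding_box(points_xy):
    """Axis-aligned bounding box of the lane line points in the construction
    frame; returns the four vertices derived from the coordinate extremes.
    """
    xs = [x for x, _ in points_xy]
    ys = [y for _, y in points_xy]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return (x_min, y_min), (x_min, y_max), (x_max, y_min), (x_max, y_max)
```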
S805, determining coordinates of the central position of the bounding box in the construction coordinate system according to the coordinates of each vertex of the bounding box in the construction coordinate system.
After determining the bounding box, the coordinates of the center position of the bounding box in the construction coordinate system may be determined from the coordinates of the respective vertices of the bounding box in the construction coordinate system. For example, the abscissa of the center position of the bounding box in the construction coordinate system can be expressed as $(x_{min} + x_{max})/2$, and the ordinate of the center position of the bounding box in the construction coordinate system can be expressed as $(y_{min} + y_{max})/2$.
S806, determining the coordinates of the central position of the bounding box in the preset coordinate system according to the coordinates of the central position of the bounding box in the construction coordinate system and the coordinate conversion relation between the construction coordinate system and the preset coordinate system.
In this embodiment, the center position of the bounding box may be determined as the position of the viewpoint of the virtual camera, so that the target lane line contained in the bounding box can fill the screen. However, the coordinates of the center position of the bounding box are coordinates in the construction coordinate system, whereas in this embodiment the position of the virtual camera is set in the preset coordinate system.
Therefore, in this embodiment, the coordinates of the central position of the bounding box in the configuration coordinate system are further converted according to the conversion relationship between the configuration coordinate system and the preset coordinate system, so as to obtain the coordinates of the central position of the bounding box in the preset coordinate system.
S807, determining coordinates of the center position of the bounding box in a preset coordinate system as coordinates of a viewpoint of the virtual camera in the virtual plane.
And then determining the coordinates of the central position of the bounding box in a preset coordinate system as the coordinates of the viewpoint of the virtual camera in the virtual plane, so that the coordinates of the viewpoint of the virtual camera in the virtual plane can be obtained effectively.
In this embodiment, by determining the coordinates of the center position of the bounding box under the construction coordinate system, and determining the coordinates of the center position of the bounding box under the preset coordinate system according to the conversion relation of the construction coordinate system and the preset coordinate system, and then determining the coordinates of the center position of the bounding box under the preset coordinate system as the coordinates of the viewpoint of the virtual camera, the viewpoint of the virtual camera in the virtual plane can be set to coincide with the center position of the bounding box of the target lane line, so that the target lane line can be fully covered with the screen to the maximum extent.
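Under the same assumptions as the sketches above (shared origin, rotation-only relationship between the two frames), the conversion of the center back to the preset coordinate system is simply the inverse rotation:

```python
import math

def center_in_preset_frame(x_min, x_max, y_min, y_max, heading_deg):
    """Center of the bounding box, converted from the construction frame
    back to (E, N) coordinates of the preset coordinate system.
    """
    cx = (x_min + x_max) / 2.0
    cy = (y_min + y_max) / 2.0
    b = math.radians(heading_deg)
    # Inverse of the rotation used by to_construction_frame above.
    e = cx * math.cos(b) + cy * math.sin(b)
    n = -cx * math.sin(b) + cy * math.cos(b)
    return e, n
```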
On the basis of the above description, an implementation of determining the distance between the virtual camera and the viewpoint is described below in conjunction with fig. 10 and 11. Fig. 10 is a schematic diagram of a first implementation of determining a distance between a virtual camera and a point of view according to an embodiment of the present application, and fig. 11 is a schematic diagram of a second implementation of determining a distance between a virtual camera and a point of view according to an embodiment of the present application.
The length of the bounding box in the y-axis direction and the length of the bounding box in the x-axis direction can be determined in this embodiment. For example, in the example of fig. 10, the length of the bounding box in the y-axis direction is the length indicated by a in fig. 10, and the length of the bounding box in the x-axis direction is the length indicated by b in fig. 10.
The length of the bounding box in the y-axis direction and the length of the bounding box in the x-axis direction may then be compared, and in one possible implementation, if the length in the y-axis direction is greater than or equal to the length in the x-axis direction, the ratio of half the length in the y-axis direction to the tangent value corresponding to the target angle is determined as the distance between the virtual camera and the viewpoint, where the target angle is half the field angle of the virtual camera.
For example, as can be appreciated in connection with fig. 10, fig. 10 illustrates the case in which the length of the bounding box in the y-axis direction is greater than its length in the x-axis direction. Assume that $h_y$ in fig. 10 represents half the length of the bounding box in the y-axis direction. In one possible implementation, the origin of the construction coordinate system coincides with the center position of the bounding box, so that half the length of the bounding box in the y-axis direction in fig. 10, $h_y$, is in practice equal to $y_{max}$. And $fov$ illustrated in fig. 10 is the field angle of the virtual camera.
Based on the above description, it can be determined that the viewpoint of the virtual camera and the center position of the bounding box are coincident, and the pitch shooting angle of the virtual camera is perpendicular to the virtual plane, because the viewpoint of the virtual camera is the center position of the shooting range, the virtual camera is actually located directly above the viewpoint, that is, the line connecting the virtual camera and the viewpoint is perpendicular to the virtual plane. Then sides 1002, 1003 and 1004 in fig. 10 form a right triangle, where the angle between sides 1002 and 1004 is half the angle fov of the virtual camera described above, and the angle between sides 1002 and 1003 is a right angle.
It will be appreciated that the ratio of the length of side 1003 to the length of side 1002 is the tangent of half the field angle of the virtual camera, i.e. $\tan(fov/2) = h_y / dis$. Because $h_y$ and the field angle $fov$ are known, based on the mathematical relationship in this right triangle, the ratio of half the length in the y-axis direction to the tangent value corresponding to the target angle can be determined as the distance between the virtual camera and the viewpoint, where the target angle is half the field angle of the virtual camera.

For example, this can be expressed as the following formula two:

$$dis = \frac{h_y}{\tan(fov/2)}$$

where $\tan(fov/2)$ is the tangent value corresponding to the target angle, $h_y$ is half the length of the bounding box in the y-axis direction, and $dis$ is the distance between the virtual camera and the viewpoint. It will be appreciated that when the origin of the construction coordinate system coincides with the center position of the bounding box, $h_y$ is equal to $y_{max}$. When the origin of the construction coordinate system does not coincide with the center of the bounding box, the length of the bounding box in the y-axis direction can be determined first and then halved to obtain $h_y$. In either case, $h_y$ is a known value.
In another possible implementation, if the length in the y-axis direction is smaller than the length in the x-axis direction, the ratio of half the length in the x-axis direction to the tangent value corresponding to the target angle may be determined as the distance between the virtual camera and the viewpoint.
For example, it can be described with reference to fig. 11, which illustrates the case in which the length of the bounding box in the y-axis direction is smaller than its length in the x-axis direction. Assume that $h_x$ in fig. 11 represents half the length of the bounding box in the x-axis direction. In one possible implementation, the origin of the construction coordinate system coincides with the center position of the bounding box, so that half the length of the bounding box in the x-axis direction in fig. 11, $h_x$, is in practice equal to $x_{max}$. And $fov$ illustrated in fig. 11 is the field angle of the virtual camera.
Similar to the description above, the sides 1102, 1103 and 1104 in fig. 11 form a right triangle, where the angle between the sides 1102 and 1104 is half the angle fov of the virtual camera described above, and the angle between the sides 1102 and 1103 is a right angle.
It will be appreciated that the ratio of the length of side 1103 to the length of side 1102 is the tangent of half the field angle of the virtual camera, i.e. $\tan(fov/2) = h_x / dis$. Because $h_x$ and the field angle $fov$ are known, based on the mathematical relationship in this right triangle, the ratio of half the length in the x-axis direction to the tangent value corresponding to the target angle can be determined as the distance between the virtual camera and the viewpoint, where the target angle is half the field angle of the virtual camera.

For example, this can be expressed as the following formula three:

$$dis = \frac{h_x}{\tan(fov/2)}$$

where $\tan(fov/2)$ is the tangent value corresponding to the target angle, $h_x$ is half the length of the bounding box in the x-axis direction, and $dis$ is the distance between the virtual camera and the viewpoint. It will be appreciated that when the origin of the construction coordinate system coincides with the center position of the bounding box, $h_x$ is equal to $x_{max}$. When the origin of the construction coordinate system does not coincide with the center of the bounding box, the length of the bounding box in the x-axis direction can be determined first and then halved to obtain $h_x$. In either case, $h_x$ is a known value.
In this embodiment, the distance between the virtual camera and the viewpoint can be simply and effectively obtained by solving the trigonometric function relationship existing between the connection line between the virtual camera and the viewpoint and the length of the bounding box, and the shooting range of the virtual camera set based on the distance can be ensured to include the bounding box, so that the target lane line can be effectively ensured to be displayed in the screen completely. And when the distance between the virtual camera and the viewpoint is determined based on the trigonometric function relation described above, the length of the bounding box in the y-axis direction and the length of the bounding box in the x-axis direction are compared first, and the distance between the virtual camera and the viewpoint is determined according to a longer length, so that it can be ensured that the shooting range corresponding to the virtual camera set based on the determined distance can include the complete target lane line. In addition, it should be noted that, in this embodiment, by determining the construction coordinate system, and then determining the bounding box of the target lane line in the construction coordinate system, the length information of the bounding box can be simply and conveniently determined, and then the distance between the virtual camera and the viewpoint can be simply and effectively obtained by solving according to the trigonometric function relationship between the corresponding length information.
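Formulas two and three reduce to a couple of lines of code; the following sketch continues the assumptions above and takes $fov$ as the full field angle in degrees:

```python
import math

def camera_distance(x_min, x_max, y_min, y_max, fov_deg):
    """Distance between camera and viewpoint so the bounding box fits within
    the shooting area: half the longer side over tan(fov / 2).
    """
    half_longer = max(x_max - x_min, y_max - y_min) / 2.0
    return half_longer / math.tan(math.radians(fov_deg) / 2.0)

print(camera_distance(-10.0, 10.0, -40.0, 40.0, fov_deg=60.0))  # ~69.28
```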
Based on the horizontal shooting direction, the pitching shooting direction, the coordinates of the viewpoint and the distance between the virtual camera and the viewpoint of the virtual camera described above, the rendering parameters of the virtual camera can be set in the virtual space formed by the preset coordinate system, wherein the position of the virtual camera is located above the virtual plane in the virtual space formed by the preset coordinate system. Through the virtual camera and the target lane line set by the rendering parameters described in the above embodiments, a complete target lane line can be rendered in the screen, and the direction of the screen from bottom to top is consistent with the starting direction of the lane line, so that the camera parameters for observing the lane line in the optimal posture can be effectively and automatically determined, and further complicated operations in the lane line rendering process are avoided.
Fig. 12 is a schematic structural diagram of a lane line rendering device according to an embodiment of the present application. As shown in fig. 12, the apparatus 120 includes: a determination module 1201, a processing module 1202, and a rendering module 1203.
A determining module 1201, configured to determine a shooting direction of the virtual camera according to a target lane line to be displayed;
the determining module 1201 is further configured to determine, in a virtual plane in which the target lane line is located, a bounding box including the target lane line;
The determining module 1201 is further configured to determine coordinates of a viewpoint of the virtual camera in the virtual plane according to the bounding box, and determine a distance between the virtual camera and the viewpoint;
a processing module 1202, configured to set rendering parameters of a virtual camera according to a shooting direction of the virtual camera, coordinates of a viewpoint, and a distance between the virtual camera and the viewpoint, so that the bounding box is located in a shooting area of the virtual camera;
the rendering module 1203 is configured to render the target lane line in a screen according to the rendering parameters of the virtual camera and the target lane line.
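The data flow between these three modules can be sketched as follows; every class and method name is hypothetical, chosen only to mirror the structure of fig. 12:

```python
class LaneLineRenderingDevice:
    """Mirrors the module layout of fig. 12; all names are hypothetical."""

    def __init__(self, determining, processing, rendering):
        self.determining = determining  # module 1201
        self.processing = processing    # module 1202
        self.rendering = rendering      # module 1203

    def show(self, lane_line, screen):
        direction = self.determining.shooting_direction(lane_line)
        box = self.determining.bounding_box(lane_line)
        viewpoint = self.determining.viewpoint(box)
        dist = self.determining.distance(box)
        params = self.processing.set_rendering_parameters(direction, viewpoint, dist)
        self.rendering.render(screen, params, lane_line)
```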
In one possible design, the determining module 1201 is specifically configured to:
determining a horizontal shooting direction of a virtual camera according to a target lane line to be displayed, wherein the horizontal shooting direction of the virtual camera is consistent with a lane line starting direction of the target lane line;
determining a pitching shooting direction of the virtual camera, wherein the pitching shooting direction of the virtual camera faces towards and is perpendicular to a virtual plane in which the target lane line is located;
the shooting directions comprise a horizontal shooting direction and a pitching shooting direction.
In one possible design, the determining module 1201 is specifically configured to:
acquiring coordinates of an ith lane line point in the virtual plane and coordinates of a jth lane line point in the virtual plane in a starting section of the target lane line, wherein i is smaller than j, and i and j are integers greater than or equal to 1;
determining the inclination angle of the connecting line of the ith lane line point and the jth lane line point according to the coordinates of the ith lane line point in the virtual plane and the coordinates of the jth lane line point in the virtual plane;
the direction corresponding to the tilt angle is determined as the horizontal shooting direction of the virtual camera.
In one possible design, the virtual plane in which the target lane line is located is a plane formed by an N axis and an E axis in a preset coordinate system, and the preset coordinate system further includes an H axis perpendicular to the virtual plane;
a first included angle between the projection, on the virtual plane, of the line connecting the virtual camera and the viewpoint and the N axis is used to indicate the horizontal shooting direction, and a second included angle between the line connecting the virtual camera and the viewpoint and the H axis is used to indicate the pitching shooting direction;
the first included angle is the inclination angle plus 90 degrees, and the second included angle is 0 degrees.
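For illustration, the inclination angle and the two included angles can be computed from two points of the starting section as below; the (E, N) tuple layout and function name are assumptions:

```python
import math

def shooting_angles(p_i, p_j):
    """Inclination angle of the start segment and the two included angles.

    p_i, p_j -- (E, N) coordinates of the ith and jth lane line points,
                with i < j (the tuple layout is an assumption).
    """
    d_e = p_j[0] - p_i[0]
    d_n = p_j[1] - p_i[1]
    tilt_deg = math.degrees(math.atan2(d_n, d_e))  # angle of the segment to the E axis
    first_angle = tilt_deg + 90.0                  # horizontal shooting direction vs. N axis
    second_angle = 0.0                             # pitch: perpendicular to the virtual plane
    return tilt_deg, first_angle, second_angle
```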
In one possible design, the determining module 1201 is specifically configured to:
determining a y-axis in a horizontal shooting direction in the virtual plane, and determining an x-axis in a direction perpendicular to the horizontal shooting direction in the virtual plane, to obtain a construction coordinate system located on the virtual plane;
Determining coordinates of each lane line point in the target lane line in a construction coordinate system;
determining a minimum abscissa value, a maximum abscissa value, a minimum ordinate value and a maximum ordinate value in the coordinates corresponding to each lane line point;
and determining the coordinates of each vertex of the bounding box in a construction coordinate system according to the minimum abscissa value, the maximum abscissa value, the minimum ordinate value and the maximum ordinate value, and determining the bounding box according to the coordinates of each vertex.
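A minimal sketch of this step, assuming a particular rotation convention for the construction coordinate system (the convention itself is not fixed by the patent text):

```python
import math

def bounding_box_in_construction_frame(points_en, tilt_deg):
    """Axis-aligned bounding box of the lane line in the construction frame.

    points_en -- iterable of (E, N) lane line point coordinates
    tilt_deg  -- inclination angle; the construction y-axis runs along the
                 horizontal shooting direction (rotation convention assumed).
    Returns (x_min, x_max, y_min, y_max) in construction coordinates.
    """
    theta = math.radians(tilt_deg)
    xs, ys = [], []
    for e, n in points_en:
        ys.append(e * math.cos(theta) + n * math.sin(theta))  # along shooting direction
        xs.append(e * math.sin(theta) - n * math.cos(theta))  # perpendicular to it
    return min(xs), max(xs), min(ys), max(ys)
```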
In one possible design, the determining module 1201 is specifically configured to:
determining coordinates of the central position of the bounding box in the construction coordinate system according to the coordinates of the vertexes of the bounding box in the construction coordinate system;
and determining the coordinates of the viewpoint of the virtual camera in the virtual plane according to the coordinates of the central position of the bounding box in the construction coordinate system.
In one possible design, the determining module 1201 is specifically configured to:
determining the coordinates of the central position of the bounding box in a preset coordinate system according to the coordinates of the central position of the bounding box in the construction coordinate system and the coordinate conversion relation between the construction coordinate system and the preset coordinate system;
and determining the coordinates of the central position of the bounding box in a preset coordinate system as the coordinates of the viewpoint of the virtual camera in the virtual plane.
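Continuing the previous sketch, the coordinate conversion back to the preset system is the inverse of the rotation assumed there:

```python
import math

def viewpoint_in_preset_frame(bbox, tilt_deg):
    """Bounding-box centre converted back to preset (E, N) coordinates.

    bbox -- (x_min, x_max, y_min, y_max) from the previous sketch; the
            conversion inverts the rotation assumed there.
    """
    x_min, x_max, y_min, y_max = bbox
    cx = (x_min + x_max) / 2.0
    cy = (y_min + y_max) / 2.0
    theta = math.radians(tilt_deg)
    e = cy * math.cos(theta) + cx * math.sin(theta)
    n = cy * math.sin(theta) - cx * math.cos(theta)
    return e, n
```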
In one possible design, the determining module 1201 is specifically configured to:
determining the length of the bounding box in the y-axis direction and the length of the bounding box in the x-axis direction;
if the length in the y-axis direction is greater than or equal to the length in the x-axis direction, determining the ratio of half of the length in the y-axis direction to the tangent value corresponding to the target angle as the distance between the virtual camera and the viewpoint, wherein the target angle is half of the angle of view of the virtual camera;
if the length in the y-axis direction is smaller than the length in the x-axis direction, determining the ratio of half of the length in the x-axis direction to the tangent value corresponding to the target angle as the distance between the virtual camera and the viewpoint.
In one possible design, the processing module 1202 is specifically configured to:
setting rendering parameters of the virtual camera in a virtual space formed by a preset coordinate system according to the shooting direction of the virtual camera, the coordinates of the viewpoint and the distance between the virtual camera and the viewpoint, wherein the rendering parameters comprise the position of the virtual camera and the shooting direction of the virtual camera;
wherein, in the virtual space formed by the preset coordinate system, the position of the virtual camera is positioned above the virtual plane.
The device provided in this embodiment may be used to implement the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein again.
Fig. 13 is a schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present application. As shown in fig. 13, the electronic device 130 in this embodiment includes: a processor 1301 and a memory 1302, wherein:
A memory 1302 for storing computer-executable instructions;
a processor 1301, configured to execute the computer-executable instructions stored in the memory to implement the steps of the lane line rendering method in the above embodiments. Reference may be made to the relevant description of the foregoing method embodiments.
Alternatively, memory 1302 may be separate or integrated with processor 1301.
When the memory 1302 is provided separately, the electronic device further comprises a bus 1303 for connecting the memory 1302 and the processor 1301.
An embodiment of the present application further provides a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the lane line rendering method executed by the electronic device described above is implemented.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards, and provide corresponding operation entries for the user to select authorization or rejection.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The integrated modules, which are implemented in the form of software functional modules, may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present application.
It should be understood that the above processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in connection with the present application may be embodied directly in a hardware processor for execution, or executed by a combination of hardware and software modules in a processor.
The memory may include high-speed RAM, and may further include non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, or the like.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, the buses in the drawings of the present application are not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the method embodiments described above may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method of rendering lane lines, comprising:
determining a horizontal shooting direction of a virtual camera according to a target lane line to be displayed, wherein the horizontal shooting direction of the virtual camera is consistent with a lane line starting direction of the target lane line;
determining a pitching shooting direction of the virtual camera, wherein the pitching shooting direction of the virtual camera faces to a virtual plane where the target lane line is located and is perpendicular to the virtual plane where the target lane line is located;
determining a bounding box containing the target lane line in a virtual plane where the target lane line is located;
Determining coordinates of a point of view of the virtual camera in the virtual plane according to the bounding box, and determining a distance between the virtual camera and the point of view;
setting rendering parameters of the virtual camera according to the horizontal shooting direction and the pitching shooting direction of the virtual camera, the coordinates of the viewpoint and the distance between the virtual camera and the viewpoint, so that the bounding box is positioned in a shooting area of the virtual camera;
and rendering the target lane line in a screen according to the rendering parameters of the virtual camera and the target lane line.
2. The method of claim 1, wherein the virtual plane in which the target lane line is located is a plane composed of an N axis and an E axis in a preset coordinate system, and the preset coordinate system further includes an H axis perpendicular to the virtual plane;
the determining the horizontal shooting direction of the virtual camera according to the target lane line to be displayed includes:
acquiring coordinates of an ith lane line point in the virtual plane and coordinates of a jth lane line point in the virtual plane in a starting section of the target lane line, wherein i is smaller than j, and i and j are integers greater than or equal to 1;
Determining an inclination angle according to the coordinates of an ith lane line point in the virtual plane and the coordinates of a jth lane line point in the virtual plane, wherein the inclination angle is an included angle between a connecting line of the ith lane line point and the jth lane line point and the E axis;
and determining the direction corresponding to the inclination angle as the horizontal shooting direction of the virtual camera.
3. The method of claim 2, wherein a first angle between a projection of a line of the virtual camera with the viewpoint on the virtual plane and the N-axis is used to indicate the horizontal shooting direction, and a second angle between the line of the virtual camera with the viewpoint and the H-axis is used to indicate the pitch shooting direction;
the first included angle is an angle corresponding to the inclined angle plus 90 degrees, and the second included angle is 0 degrees.
4. A method according to any one of claims 2-3, wherein determining a bounding box containing the target lane line in a virtual plane in which the target lane line is located comprises:
determining a y-axis in a horizontal shooting direction in the virtual plane, and determining an x-axis in a direction perpendicular to the horizontal shooting direction in the virtual plane, to obtain a construction coordinate system located on the virtual plane;
Determining coordinates of each lane line point in the target lane line in the construction coordinate system;
determining a minimum abscissa value, a maximum abscissa value, a minimum ordinate value and a maximum ordinate value in the coordinates of each lane line point in a construction coordinate system;
determining coordinates of each vertex of the bounding box in a construction coordinate system according to the minimum abscissa value, the maximum abscissa value, the minimum ordinate value and the maximum ordinate value;
and determining the bounding box according to the coordinates of each vertex.
5. The method of claim 4, wherein the determining coordinates of the point of view of the virtual camera in the virtual plane from the bounding box comprises:
determining the coordinates of the central position of the bounding box in a construction coordinate system according to the coordinates of each vertex of the bounding box in the construction coordinate system;
and determining the coordinates of the viewpoint of the virtual camera in the virtual plane according to the coordinates of the central position of the bounding box in the construction coordinate system.
6. The method of claim 5, wherein determining coordinates of the point of view of the virtual camera in the virtual plane from coordinates of the center position of the bounding box in the build coordinate system comprises:
Determining the coordinates of the central position of the bounding box in the preset coordinate system according to the coordinates of the central position of the bounding box in the construction coordinate system and the coordinate conversion relation between the construction coordinate system and the preset coordinate system;
and determining the coordinates of the central position of the bounding box in the preset coordinate system as the coordinates of the point of view of the virtual camera in the virtual plane.
7. The method of claim 4, wherein determining the distance between the virtual camera and the point of view comprises:
determining a length of the bounding box in the y-axis direction and a length of the bounding box in the x-axis direction;
if the length in the y-axis direction is greater than or equal to the length in the x-axis direction, determining a ratio of half of the length in the y-axis direction to a tangent value corresponding to a target angle, which is half of the angle of view of the virtual camera, as a distance between the virtual camera and the viewpoint;
and if the length in the y-axis direction is smaller than the length in the x-axis direction, determining the ratio of half of the length in the x-axis direction to the tangent value corresponding to the target angle as the distance between the virtual camera and the viewpoint.
8. The method of claim 6, wherein the setting the rendering parameters of the virtual camera according to the shooting direction of the virtual camera, the coordinates of the viewpoint, and the distance between the virtual camera and the viewpoint comprises:
setting rendering parameters of the virtual camera in a preset coordinate system according to the shooting direction of the virtual camera, the coordinates of the point of view and the distance between the virtual camera and the point of view, wherein the rendering parameters comprise the position of the virtual camera and the shooting direction of the virtual camera;
wherein, in the virtual space formed by the preset coordinate system, the position of the virtual camera is positioned above the virtual plane.
9. A lane line rendering device, characterized by comprising:
the determining module is used for determining the horizontal shooting direction of the virtual camera according to the target lane line to be displayed, wherein the horizontal shooting direction of the virtual camera is consistent with the lane line starting direction of the target lane line;
the determining module is further configured to determine a pitching shooting direction of the virtual camera, where the pitching shooting direction of the virtual camera is oriented to a virtual plane where the target lane line is located and is perpendicular to the virtual plane where the target lane line is located;
The determining module is further used for determining a bounding box containing the target lane line in a virtual plane where the target lane line is located;
the determining module is further used for determining coordinates of the point of view of the virtual camera in the virtual plane according to the bounding box and determining the distance between the virtual camera and the point of view;
the processing module is used for setting rendering parameters of the virtual camera according to the horizontal shooting direction and the pitching shooting direction of the virtual camera, the coordinates of the point of view and the distance between the virtual camera and the point of view so that the bounding box is positioned in the shooting area of the virtual camera;
and the rendering module is used for rendering the target lane line in a screen according to the rendering parameters of the virtual camera and the target lane line.
10. A computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 8.
CN202310308201.7A 2023-03-28 2023-03-28 Lane line rendering method, device and storage medium Active CN116012508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310308201.7A CN116012508B (en) 2023-03-28 2023-03-28 Lane line rendering method, device and storage medium

Publications (2)

Publication Number Publication Date
CN116012508A CN116012508A (en) 2023-04-25
CN116012508B true CN116012508B (en) 2023-06-23

Family

ID=86037719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310308201.7A Active CN116012508B (en) 2023-03-28 2023-03-28 Lane line rendering method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116012508B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114845147A (en) * 2022-04-29 2022-08-02 北京奇艺世纪科技有限公司 Screen rendering method, display picture synthesis method and device and intelligent terminal
WO2023028880A1 (en) * 2021-08-31 2023-03-09 华为技术有限公司 External parameter calibration method for vehicle-mounted camera and related apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5808985B2 (en) * 2011-09-05 2015-11-10 任天堂株式会社 Information processing program, information processing apparatus, information processing system, and information processing method
CN102662566B (en) * 2012-03-21 2016-08-24 中兴通讯股份有限公司 Screen content amplification display method and terminal
CN110456907A (en) * 2019-07-24 2019-11-15 广东虚拟现实科技有限公司 Control method, device, terminal device and the storage medium of virtual screen
CN111476876B (en) * 2020-04-02 2024-01-16 北京七维视觉传媒科技有限公司 Three-dimensional image rendering method, device, equipment and readable storage medium
CN111880654A (en) * 2020-07-27 2020-11-03 歌尔光学科技有限公司 Image display method and device, wearable device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant