CN115326088A - Navigation image rendering method and device, electronic equipment and readable storage medium - Google Patents

Navigation image rendering method and device, electronic equipment and readable storage medium

Info

Publication number
CN115326088A
Authority
CN
China
Prior art keywords
navigated
target pixel
target
coordinate system
navigated object
Prior art date
Legal status
Pending
Application number
CN202211085804.7A
Other languages
Chinese (zh)
Inventor
张匡世
章启鹏
郭宁
Current Assignee
Autonavi Software Co Ltd
Original Assignee
Autonavi Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Autonavi Software Co Ltd
Priority to CN202211085804.7A
Publication of CN115326088A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation specially adapted for navigation in a road network
    • G01C21/28 Navigation with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415 Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3667 Display of a road map
    • G01C21/367 Details, e.g. road map scale, orientation, zooming, illumination, level of detail, scrolling of road map or positioning of current position marker


Abstract

The present disclosure relates to the field of image processing technologies, and in particular to a navigation image rendering method and apparatus, an electronic device, and a readable storage medium. The navigation image rendering method includes: acquiring navigated object distance information corresponding to a target pixel, where the target pixel corresponds to a target lane element and the navigated object distance information indicates a target distance between the target pixel and the navigated object; acquiring the transparency of the target pixel according to the navigated object distance information, where the transparency of the target pixel is negatively correlated with the target distance; and rendering the navigation image according to the transparency of the target pixel. This technical scheme blurs the lane lines around the vehicle, so that even when the accuracy of the vehicle positioning information is poor, the user cannot read a (possibly incorrect) lane position off the navigation picture, is therefore not misled by wrong positioning information, and has a better experience.

Description

Navigation image rendering method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a navigation image rendering method and apparatus, an electronic device, and a readable storage medium.
Background
In recent years, as navigation technology has evolved from road-level navigation to lane-level navigation, the expression of roads in navigation images obtained from navigation data has become richer and more refined, and a user can learn the relative position relationship between the navigated object and the corresponding lane lines, such as the real-time lane in which the navigated object is located or the distance between the navigated object and an adjacent lane.
However, the navigation data is not always lane-level navigation data. In such scenarios the accuracy of the vehicle positioning information obtained from the navigation data may be poor, so the navigation image rendered from that positioning information may not accurately reflect the relative position relationship between the navigated object and the lane elements. The inventors of the present disclosure found that, when the accuracy of the vehicle positioning information is poor, a user who views the navigation image to determine his or her lane position is easily misled by the wrong positioning information, which impairs the user experience.
Disclosure of Invention
In order to solve the problems in the related art, embodiments of the present disclosure provide a navigation image rendering method, apparatus, electronic device and readable storage medium.
In a first aspect, an embodiment of the present disclosure provides a navigation image rendering method, including:
acquiring navigated object distance information corresponding to a target pixel, wherein the target pixel corresponds to a target lane element, and the navigated object distance information is used for indicating a target distance between the target pixel and the navigated object;
acquiring the transparency of the target pixel according to the navigated object distance information, wherein the transparency of the target pixel is negatively correlated with the target distance;
and rendering the navigation image according to the transparency of the target pixel.
In one implementation of the present disclosure, the navigated object distance information includes the square of the semi-minor axis length of a target ellipse, where the center of the target ellipse coincides with the position of the navigated object, the semi-major axis of the target ellipse coincides with the moving direction of the navigated object, and the target ellipse passes through the target pixel.
In one implementation of the present disclosure, when the target pixel is located in the first quadrant of the navigated coordinate system, is located in the second quadrant of the navigated coordinate system, lies on the horizontal axis of the navigated coordinate system, lies on the positive semi-axis of the vertical axis of the navigated coordinate system, or coincides with the origin of the navigated coordinate system, the eccentricity of the target ellipse is a first eccentricity;
when the target pixel is located in the third quadrant of the navigated coordinate system, is located in the fourth quadrant of the navigated coordinate system, or lies on the negative semi-axis of the vertical axis of the navigated coordinate system, the eccentricity of the target ellipse is a second eccentricity, the second eccentricity being smaller than the first eccentricity;
the navigated coordinate system is a plane rectangular coordinate system, the origin of the navigated coordinate system coincides with the position of the navigated object, and the positive semi-axis of the vertical axis of the navigated coordinate system coincides with the moving direction of the navigated object.
In one implementation manner of the present disclosure, before obtaining the navigated object distance information corresponding to the target pixel, the method further includes:
acquiring vertex coordinates of a triangular mesh vertex corresponding to at least one road surface element, position coordinates of a navigated object and moving direction indication information for indicating the moving direction of the navigated object;
acquiring target vertex coordinates of the corresponding triangular mesh vertex in the navigated coordinate system according to the vertex coordinates, the position coordinates and the moving direction indication information;
acquiring pixel coordinates of pixels in the corresponding triangular mesh according to the target vertex coordinates;
acquiring navigated object distance information corresponding to a target pixel, comprising:
and acquiring the distance information of the navigated object corresponding to the target pixel according to the pixel coordinates.
In one implementation manner of the present disclosure, obtaining a target vertex coordinate of a corresponding triangular mesh vertex in a navigated coordinate system according to the vertex coordinate, the position coordinate, and the moving direction indication information includes:
acquiring a navigated object vector from the position of the navigated object to the vertex of the corresponding triangular mesh according to the vertex coordinates and the position coordinates;
performing vector decomposition according to the navigated object vector and the moving direction indication information to obtain base vectors corresponding to a horizontal axis and a vertical axis in a navigated coordinate system;
and acquiring the coordinates of the target vertex according to the base vector.
In one implementation manner of the present disclosure, before obtaining the navigated object distance information corresponding to the target pixel, the method further includes:
acquiring road surface element category information corresponding to the triangular mesh vertices, wherein the road surface element category information is used for indicating the category of the road surface element corresponding to the triangular mesh where each triangular mesh vertex is located;
acquiring the distance information of the navigated object corresponding to the target pixel according to the pixel coordinates, comprising:
in response to determining, according to the road surface element category information, that the road surface element corresponding to the triangular mesh where the target pixel is located is a lane element, acquiring the navigated object distance information corresponding to the target pixel according to the pixel coordinates.
In one implementation manner of the present disclosure, before acquiring the navigated object distance information corresponding to the target pixel, the method further includes:
acquiring positioning data indication information;
acquiring the distance information of the navigated object corresponding to the target pixel according to the pixel coordinates, comprising:
in response to determining that at least one of the vertex coordinates, the position coordinates, and the moving direction indication information is acquired from the non-lane-level positioning data according to the positioning data indication information, acquiring navigated object distance information corresponding to the target pixel according to the pixel coordinates.
In a second aspect, an embodiment of the present disclosure provides a navigation image rendering apparatus, including:
a distance acquisition module configured to acquire navigated object distance information corresponding to a target pixel, the target pixel corresponding to a target lane element, the navigated object distance information indicating a target distance between the target pixel and the navigated object;
a transparency acquisition module configured to acquire the transparency of the target pixel according to the navigated object distance information, the transparency of the target pixel being negatively correlated with the target distance;
an image rendering module configured to render the navigation image according to the transparency of the target pixel.
In a third aspect, the present disclosure provides an electronic device including a memory and a processor, where the memory is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the method according to the first aspect or any implementation manner of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having computer instructions stored thereon which, when executed by a processor, implement the method according to the first aspect or any implementation manner of the first aspect.
In the technical scheme of the present disclosure, navigated object distance information corresponding to a target pixel, that is, information indicating the target distance between the target pixel corresponding to a target lane element and the navigated object, is acquired; the transparency of the target pixel is acquired according to the navigated object distance information such that the transparency is negatively correlated with the target distance; and the navigation image is rendered according to the transparency of the target pixel. In the rendered navigation image, the closer a lane element is to the navigated object, the higher the transparency of the pixels displaying it, and the farther a lane element is from the navigated object, the lower the transparency of the pixels displaying it, so a user cannot estimate the relative position between the navigated object and a lane element from the lane elements close to the navigated object in the navigation image.
In the technical scheme of the present disclosure, the navigated object distance information is defined to include the square of the semi-minor axis length of the target ellipse, where the center of the target ellipse coincides with the position of the navigated object, the semi-major axis of the target ellipse coincides with the moving direction of the navigated object, and the target ellipse passes through the target pixel. As a result, lane elements in front of the navigated object start fading to transparent while still relatively far from it, whereas lane elements on the two sides of the navigated object start fading only when relatively close to it. When the viewing angle of the navigation image is above and behind the navigated object, the presentation of the lane elements in front of the navigated object is therefore similar to that of the lane elements on its two sides, which helps maintain the consistency of the presentation of lane elements in the navigation image and improves the user experience.
In the technical scheme of the present disclosure, the eccentricity of the target ellipse is a first eccentricity when the target pixel is located in the first or second quadrant of the navigated coordinate system, lies on the horizontal axis of the navigated coordinate system, lies on the positive semi-axis of the vertical axis of the navigated coordinate system, or coincides with the origin of the navigated coordinate system; and the eccentricity of the target ellipse is a second eccentricity, smaller than the first eccentricity, when the target pixel is located in the third or fourth quadrant of the navigated coordinate system or lies on the negative semi-axis of the vertical axis of the navigated coordinate system. The navigated coordinate system is a plane rectangular coordinate system whose origin coincides with the position of the navigated object and whose positive vertical semi-axis coincides with the moving direction of the navigated object. When the viewing angle of the navigation image is above and behind the navigated object, the presentation of the lane elements in front of the navigated object is thus similar to that of the lane elements on its two sides and behind it, which maintains the consistency of the presentation of lane elements in the navigation image and improves the user experience.
In the technical scheme of the present disclosure, the vertex coordinates of the triangular mesh vertices corresponding to at least one road surface element, the position coordinates of the navigated object, and the moving direction indication information indicating the moving direction of the navigated object are acquired; the target vertex coordinates of the corresponding triangular mesh vertices in the navigated coordinate system are acquired according to the vertex coordinates, the position coordinates, and the moving direction indication information; the pixel coordinates of the pixels in the corresponding triangular meshes are acquired according to the target vertex coordinates; and the navigated object distance information corresponding to the target pixel is acquired according to the pixel coordinates. This improves the accuracy of the acquired navigated object distance information.
In the technical scheme of the present disclosure, the navigated object vector from the position of the navigated object to the corresponding triangular mesh vertex is acquired according to the vertex coordinates and the position coordinates; vector decomposition is performed according to the navigated object vector and the moving direction indication information to obtain the base vectors corresponding to the horizontal axis and the vertical axis of the navigated coordinate system; and the target vertex coordinates are acquired according to the base vectors. This simplifies the operation steps for obtaining the target vertex coordinates, reduces the amount of computation, and improves processing efficiency.
In the technical scheme of the present disclosure, road surface element category information corresponding to the triangular mesh vertices is acquired, the road surface element category information indicating the category of the road surface element corresponding to the triangular mesh where each vertex is located; and, in response to determining according to the road surface element category information that the road surface element corresponding to the triangular mesh where the target pixel is located is a lane element, the navigated object distance information corresponding to the target pixel is acquired according to the pixel coordinates. In this way, the target pixels for which transparency is subsequently acquired can be limited to pixels whose corresponding road surface element is a lane element, which ensures that only the content corresponding to lane elements is made transparent in the rendered navigation image, does not affect the presentation of other elements in the navigation image, and better guarantees the user experience.
According to the technical scheme of the present disclosure, positioning data indication information is acquired, and the navigated object distance information corresponding to the target pixel is acquired according to the pixel coordinates in response to determining, according to the positioning data indication information, that at least one of the vertex coordinates, the position coordinates, and the moving direction indication information was obtained from non-lane-level positioning data. The lane elements in the navigation image are therefore made transparent only when the reliability of at least one of the vertex coordinates, the position coordinates, and the moving direction indication information cannot support presenting the relative position relationship between the navigated object and the lane elements in the navigation image, which ensures that any relative position relationship the user does learn from the navigation image is relatively accurate, and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
fig. 1 shows a flowchart of a navigation image rendering method according to an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a navigation image according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of a navigation image according to an embodiment of the present disclosure.
Fig. 4 shows a flow diagram of a navigation image rendering method according to an embodiment of the present disclosure.
Fig. 5 illustrates a block diagram of a navigation image rendering apparatus according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 7 shows a schematic block diagram of a computer system suitable for use in implementing a method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numerals, steps, actions, components, parts, or combinations thereof in the specification, and are not intended to preclude the possibility that one or more other features, numerals, steps, actions, components, parts, or combinations thereof are present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
In the present disclosure, if an operation of acquiring user information or user data or an operation of presenting user information or user data to others is involved, the operations are all operations authorized, confirmed by a user, or actively selected by the user.
In recent years, as navigation technology has evolved from road-level navigation to lane-level navigation, the expression of roads in navigation images obtained from navigation data has become richer and more refined, and a user can learn the relative position relationship between the navigated object and the corresponding lane lines, such as the real-time lane in which the navigated object is located or the distance between the navigated object and an adjacent lane.
However, the navigation data is not always lane-level navigation data, so the navigation image obtained from the navigation data may not accurately reflect the relative position between the navigated object and the lane elements; for example, the navigated object may appear in the wrong lane in the navigation image. Lane elements such as lane lines in the navigation image may then mislead the user about the position of the navigated object, interfering with the user's behavior, such as driving actions, and impairing the user experience.
In one related technical solution, in order to prevent the lanes in the navigation image from interfering with the user, when the navigation image is rendered, the coordinates of the vertices of the triangular meshes corresponding to the lane elements are acquired, and the distance from each triangular mesh vertex to the position of the navigated object is calculated from those coordinates. When the distance is less than or equal to a preset distance threshold, the navigated object is considered close to the lane element, and the transparency of the triangular mesh vertex is set according to the distance so that the transparency is negatively correlated with the distance. In the rendered navigation image, the closer a triangular mesh corresponding to a lane element is to the navigated object, the higher the transparency of the pixels in that triangular mesh, which prevents the user from estimating the relative position between the navigated object and a lane element from the lane elements close to the navigated object.
The above scheme can ensure to some extent that the lane elements in the navigation image do not affect the user's judgment of the navigated object's position. However, in some scenes (for example, on a relatively flat road surface), the triangular meshes corresponding to the lane elements are relatively large. Because the transparency of the pixels in a triangular mesh is derived from the distances from its vertices to the position of the navigated object, when the navigated object lies inside a large triangular mesh and is far from every vertex of that mesh, the transparency of the pixels in the mesh may remain low. The lane elements rendered by that mesh then stay opaque even though they are close to the navigated object, and the user can still estimate the relative position between the navigated object and those lane elements. This scheme therefore still cannot guarantee that the lane elements in the navigation image do not affect the user's judgment of the navigated object's position, and the navigation image may interfere with the user's behavior, such as driving actions, and impair the user experience.
In order to solve the above problem, in the technical scheme of the present disclosure, navigated object distance information corresponding to a target pixel, that is, information indicating the target distance between the target pixel corresponding to a target lane element and the navigated object, is acquired; the transparency of the target pixel is acquired according to the navigated object distance information so that the transparency is negatively correlated with the target distance; and the navigation image is rendered according to the transparency of the target pixel. In the rendered navigation image, the closer a lane element is to the navigated object, the higher the transparency of the pixels displaying it, and the farther a lane element is from the navigated object, the lower the transparency of the pixels displaying it. A user therefore cannot estimate the relative position between the navigated object and a lane element from the lane elements close to the navigated object in the navigation image, the lane elements in the navigation image do not affect the user's judgment of the navigated object's position, the navigation image does not interfere with the user's behavior, such as driving actions, and the user experience is improved.
Fig. 1 shows a flowchart of a navigation image rendering method according to an embodiment of the present disclosure. As shown in fig. 1, the navigation image rendering method includes the steps of:
In step S101, the navigated object distance information corresponding to the target pixel is acquired.
The target pixel corresponds to a target lane element, and the navigated object distance information is used for indicating a target distance between the target pixel and the navigated object.
In step S102, the transparency of the target pixel is obtained according to the navigated object distance information, and the transparency of the target pixel is inversely related to the target distance.
In step S103, the navigation image is rendered according to the transparency of the target pixel.
In one embodiment of the present disclosure, a target lane element may be understood as part of a lane marker, where a lane marker is a marking disposed on the road surface to indicate a driving line or driving lane. Lane markers may include solid line markings, double solid line markings, dashed line markings, combined dashed and solid line markings, zigzag lane markings, deceleration markings, optical illusion markings, diversion line markings, stop lines, no-stopping lines, guidance markers, number markers, diamond markers, inverted triangle markers, and the like.
In one embodiment of the present disclosure, that the target pixel corresponds to a target lane element may be understood to mean that the target pixel is used for displaying the target lane element.
In one embodiment of the present disclosure, the navigated object may be understood as a mobile communication terminal corresponding to a user, and may also be understood as a vehicle driven by the user, such as a bicycle, a motorcycle, a car, a truck, a passenger car, and the like.
In one embodiment of the present disclosure, acquiring the navigated object distance information corresponding to the target pixel may be understood as reading previously stored navigated object distance information, acquiring it from another device or system, or calculating it according to a preset algorithm.
In one embodiment of the present disclosure, the navigated object distance information may be understood to include a distance value for the target distance between the target pixel and the navigated object, or to include a value that is positively correlated with that target distance, and the like.
In one embodiment of the present disclosure, the transparency of the target pixel may be understood as reflecting how transparent, or how visible, the target pixel itself is. When the target pixel is completely transparent, the image content below it shows through the pixel; when the target pixel is semi-transparent, how much it occludes the image below it is determined by its transparency. The transparency of a pixel ranges from 0 to 100%: a transparency of 100% means the pixel is completely transparent, a transparency of 0% means it is completely opaque, and a transparency in between means it is semi-transparent.
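As an illustration of these semantics, the sketch below shows one conventional way such a transparency value could be applied when compositing a pixel over the content beneath it; this is a minimal Python sketch of standard alpha blending, not code from the patent:

```python
def composite(pixel_rgb, below_rgb, transparency):
    """Blend a pixel over the image content beneath it.

    transparency is in [0.0, 1.0]: 1.0 (100%) lets the content below
    show through completely, 0.0 (0%) keeps the pixel fully opaque.
    """
    t = min(max(transparency, 0.0), 1.0)
    return tuple((1.0 - t) * p + t * b
                 for p, b in zip(pixel_rgb, below_rgb))
```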
In one embodiment of the present disclosure, obtaining the transparency of the target pixel according to the navigated object distance information may be understood as substituting the navigated object distance information into a previously obtained transparency algorithm to calculate the transparency of the target pixel; as inputting the navigated object distance information into a pre-trained transparency model and obtaining the transparency of the target pixel output by the model; or as querying a previously acquired transparency database with the navigated object distance information to obtain the transparency of the target pixel corresponding to it.
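A minimal sketch of the first of these options, assuming (as in step S410 of the embodiment below) a clamped linear mapping from the squared semi-minor axis to transparency; the fade range constant is a hypothetical tuning parameter, not a value given in the patent:

```python
def transparency_from_distance(b_squared, fade_range_sq=400.0):
    """Map the navigated object distance information (the squared
    semi-minor axis of the target ellipse) to a transparency in
    [0.0, 1.0]. Transparency is negatively correlated with the
    target distance: small b_squared (pixel close to the navigated
    object) gives high transparency; beyond fade_range_sq the pixel
    stays fully opaque. fade_range_sq is an assumed constant.
    """
    t = 1.0 - b_squared / fade_range_sq
    return min(max(t, 0.0), 1.0)
```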
In the technical scheme of the present disclosure, navigated object distance information corresponding to a target pixel, that is, information indicating the target distance between the target pixel corresponding to a target lane element and the navigated object, is acquired; the transparency of the target pixel is acquired according to the navigated object distance information such that the transparency is negatively correlated with the target distance; and the navigation image is rendered according to the transparency of the target pixel. In the rendered navigation image, the closer a lane element is to the navigated object, the higher the transparency of the pixels displaying it, and the farther a lane element is from the navigated object, the lower the transparency of the pixels displaying it, so a user cannot estimate the relative position between the navigated object and a lane element from the lane elements close to the navigated object in the navigation image.
In one implementation of the present disclosure, the navigated object distance information includes the square of the semi-minor axis length of a target ellipse, where the center of the target ellipse coincides with the position of the navigated object, the semi-major axis of the target ellipse coincides with the moving direction of the navigated object, and the target ellipse passes through the target pixel.
In one embodiment of the present disclosure, the target ellipse may be understood as an ellipse having an eccentricity greater than 0 and less than 1.
Illustratively, fig. 2 shows a schematic view of a navigation image according to an embodiment of the present disclosure. As shown in fig. 2, the target ellipse 201 passes through the target pixel 202, the center 211 of the target ellipse 201 coincides with the position of the navigated object 203, the semi-major axis 221 of the target ellipse 201 coincides with the direction of movement 213 of the navigated object 203, and the navigated object distance information includes the square of the semi-minor axis 231 length of the target ellipse 201.
In the technical scheme of the present disclosure, the navigated object distance information is defined to include the square of the semi-minor axis length of the target ellipse, where the center of the target ellipse coincides with the position of the navigated object, the semi-major axis of the target ellipse coincides with the moving direction of the navigated object, and the target ellipse passes through the target pixel. As a result, lane elements in front of the navigated object start fading to transparent while still relatively far from it, whereas lane elements on the two sides of the navigated object start fading only when relatively close to it. When the viewing angle of the navigation image is above and behind the navigated object, the presentation of the lane elements in front of the navigated object is therefore similar to that of the lane elements on its two sides, which helps maintain the consistency of the presentation of lane elements in the navigation image and improves the user experience.
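To make the geometry concrete: for an ellipse centred on the navigated object with its major axis along the moving direction (the vertical axis of the navigated coordinate system) and eccentricity e, a point (x, y) on the ellipse satisfies x²/b² + y²/a² = 1 with b² = a²(1 − e²), which gives b² = x² + (1 − e²)y². A minimal Python sketch of this computation, assuming the pixel's coordinates are already expressed in the navigated coordinate system:

```python
def semi_minor_axis_squared(x, y, eccentricity):
    """Squared semi-minor axis b**2 of the target ellipse that is
    centred on the navigated object, has its semi-major axis along
    the moving direction (the y axis of the navigated coordinate
    system), and passes through the pixel at (x, y).

    Derived from x**2/b**2 + y**2/a**2 = 1 and b**2 = a**2*(1 - e**2):
        b**2 = x**2 + (1 - e**2) * y**2
    """
    return x * x + (1.0 - eccentricity ** 2) * y * y
```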
In one implementation of the present disclosure, when the target pixel is located in the first quadrant of the navigated coordinate system, is located in the second quadrant of the navigated coordinate system, lies on the horizontal axis of the navigated coordinate system, lies on the positive semi-axis of the vertical axis of the navigated coordinate system, or coincides with the origin of the navigated coordinate system, the eccentricity of the target ellipse is a first eccentricity;
when the target pixel is located in the third quadrant of the navigated coordinate system, is located in the fourth quadrant of the navigated coordinate system, or lies on the negative semi-axis of the vertical axis of the navigated coordinate system, the eccentricity of the target ellipse is a second eccentricity, the second eccentricity being smaller than the first eccentricity;
the navigated coordinate system is a plane rectangular coordinate system, the origin of the navigated coordinate system coincides with the position of the navigated object, and the positive semi-axis of the vertical axis of the navigated coordinate system coincides with the moving direction of the navigated object.
In one embodiment of the present disclosure, the navigated coordinate system may be established by a vertex shader.
In one embodiment of the present disclosure, the first eccentricity may be 0.5 and the second eccentricity may be 0.25.
Illustratively, fig. 3 shows a schematic diagram of a navigation image according to an embodiment of the present disclosure. As shown in fig. 3, when the target pixel is located in the first quadrant 301 of the navigated coordinate system, is located in the second quadrant 302, lies on the horizontal axis 303, lies on the positive semi-axis 304 of the vertical axis, or coincides with the origin 305, the eccentricity of the target ellipse 310 is the first eccentricity; when the target pixel is located in the third quadrant 306, is located in the fourth quadrant 307, or lies on the negative semi-axis 308 of the vertical axis, the eccentricity of the target ellipse 310 is the second eccentricity, the second eccentricity being smaller than the first eccentricity.
In the technical scheme of the present disclosure, the eccentricity of the target ellipse is a first eccentricity when the target pixel is located in the first or second quadrant of the navigated coordinate system, lies on the horizontal axis of the navigated coordinate system, lies on the positive semi-axis of the vertical axis of the navigated coordinate system, or coincides with the origin of the navigated coordinate system; and the eccentricity of the target ellipse is a second eccentricity, smaller than the first eccentricity, when the target pixel is located in the third or fourth quadrant of the navigated coordinate system or lies on the negative semi-axis of the vertical axis of the navigated coordinate system. The navigated coordinate system is a plane rectangular coordinate system whose origin coincides with the position of the navigated object and whose positive vertical semi-axis coincides with the moving direction of the navigated object. When the viewing angle of the navigation image is above and behind the navigated object, the presentation of the lane elements in front of the navigated object is thus similar to that of the lane elements on its two sides and behind it, which maintains the consistency of the presentation of lane elements in the navigation image and improves the user experience.
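Since all the cases using the first eccentricity have a non-negative vertical coordinate and all the cases using the second eccentricity have a negative one, the selection reduces to a sign test on y. A sketch using the example values from the embodiment above (0.5 and 0.25):

```python
FIRST_ECCENTRICITY = 0.5    # quadrants I/II, horizontal axis, positive vertical semi-axis, origin
SECOND_ECCENTRICITY = 0.25  # quadrants III/IV, negative vertical semi-axis

def target_eccentricity(y):
    """Pick the eccentricity of the target ellipse from the pixel's
    vertical coordinate in the navigated coordinate system (y >= 0
    is in front of or beside the navigated object, y < 0 is behind it).
    """
    return FIRST_ECCENTRICITY if y >= 0.0 else SECOND_ECCENTRICITY
```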
In one implementation manner of the present disclosure, before obtaining the navigated object distance information corresponding to the target pixel, the method further includes:
acquiring vertex coordinates of a triangular mesh vertex corresponding to at least one road surface element, position coordinates of a navigated object and moving direction indication information for indicating the moving direction of the navigated object;
acquiring target vertex coordinates of the corresponding triangular mesh vertex in the navigated coordinate system according to the vertex coordinates, the position coordinates and the moving direction indication information;
acquiring pixel coordinates of pixels in the corresponding triangular mesh according to the target vertex coordinates;
acquiring navigated object distance information corresponding to a target pixel, comprising:
and acquiring the distance information of the navigated object corresponding to the target pixel according to the pixel coordinates.
In one embodiment of the present disclosure, the triangular mesh corresponding to the road surface element may be understood as a triangular mesh for displaying the corresponding road surface element.
In one embodiment of the present disclosure, the vertex coordinates and the position coordinates may be understood as being obtained from rendering data, and the moving direction indication information may be understood as being obtained from positioning data.
In one embodiment of the present disclosure, the vertex coordinates and the position coordinates of the navigated object may be understood as coordinates in the same coordinate system, which may be, for example, a rendering coordinate system.
In one embodiment of the present disclosure, acquiring the vertex coordinates, the position coordinates, and the moving direction indication information may be understood as reading previously stored values, or as acquiring positioning data and processing it to obtain them. The positioning data may include non-lane-level positioning data, that is, positioning data other than lane-level positioning data. Lane-level positioning data may include real-time kinematic (RTK) positioning data and visual lane-level positioning data, where the visual lane-level positioning data may be obtained by processing images captured by a camera on the navigated object.
In one embodiment of the present disclosure, obtaining the target vertex coordinates of the corresponding triangular mesh vertices in the navigated coordinate system according to the vertex coordinates, the position coordinates, and the moving direction indication information may be understood as substituting these values into a preset algorithm, or as passing them to another device or system and receiving the target vertex coordinates it returns. For example, the vertex coordinates, the position coordinates, and the moving direction indication information may be passed to a vertex shader, the navigated coordinate system may be established by the vertex shader, and the target vertex coordinates of the corresponding triangular mesh vertices output by the vertex shader may then be obtained.
In one embodiment of the present disclosure, obtaining the pixel coordinates of the pixels in the corresponding triangular mesh according to the target vertex coordinates may be understood as substituting the target vertex coordinates into a preset algorithm, or as passing the target vertex coordinates to another device or system and receiving the pixel coordinates it returns. For example, the target vertex coordinates may be passed into the pixel shader to obtain the pixel coordinates of the pixels in the corresponding triangular mesh output by the pixel shader.
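In a typical GPU pipeline this per-pixel coordinate falls out of rasterization: the target vertex coordinates are declared as varyings and the hardware interpolates them across the triangle. A sketch of the interpolation itself, assuming barycentric weights for the pixel are available:

```python
def interpolated_pixel_coord(v0, v1, v2, w0, w1, w2):
    """Coordinate of a pixel inside a triangle in the navigated
    coordinate system, interpolated from the triangle's target
    vertex coordinates v0, v1, v2 (each an (x, y) pair) with the
    pixel's barycentric weights w0 + w1 + w2 == 1. This mirrors
    what a pixel shader receives automatically as an interpolant.
    """
    x = w0 * v0[0] + w1 * v1[0] + w2 * v2[0]
    y = w0 * v0[1] + w1 * v1[1] + w2 * v2[1]
    return x, y
```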
In the technical scheme of the present disclosure, the vertex coordinates of the triangular mesh vertices corresponding to at least one road surface element, the position coordinates of the navigated object, and the moving direction indication information indicating the moving direction of the navigated object are acquired; the target vertex coordinates of the corresponding triangular mesh vertices in the navigated coordinate system are acquired according to the vertex coordinates, the position coordinates, and the moving direction indication information; the pixel coordinates of the pixels in the corresponding triangular meshes are acquired according to the target vertex coordinates; and the navigated object distance information corresponding to the target pixel is acquired according to the pixel coordinates. This improves the accuracy of the acquired navigated object distance information.
In one implementation manner of the present disclosure, obtaining target vertex coordinates of a corresponding triangular mesh vertex in a navigated coordinate system according to the vertex coordinates, the position coordinates, and the moving direction indication information includes:
acquiring a navigated object vector from the position of the navigated object to the vertex of the corresponding triangular mesh according to the vertex coordinates and the position coordinates;
performing vector decomposition according to the navigated object vector and the moving direction indication information to obtain base vectors corresponding to a horizontal axis and a vertical axis in a navigated coordinate system;
and acquiring the coordinates of the target vertex according to the base vector.
In the technical scheme of the present disclosure, the navigated object vector from the position of the navigated object to the corresponding triangular mesh vertex is acquired according to the vertex coordinates and the position coordinates; vector decomposition is performed according to the navigated object vector and the moving direction indication information to obtain the base vectors corresponding to the horizontal axis and the vertical axis of the navigated coordinate system; and the target vertex coordinates are acquired according to the base vectors. This simplifies the operation steps for obtaining the target vertex coordinates, reduces the amount of computation, and improves processing efficiency.
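A minimal 2D sketch of this decomposition, assuming the vertex and object positions are given in a common map-plane coordinate system and the moving direction is a nonzero 2D vector; which perpendicular is taken as the positive horizontal axis is an assumption:

```python
import math

def to_navigated_frame(vertex_xy, object_xy, move_dir_xy):
    """Express a triangular mesh vertex in the navigated coordinate
    system: origin at the navigated object, positive vertical axis
    along the moving direction.
    """
    # navigated object vector: from the object's position to the vertex
    vx = vertex_xy[0] - object_xy[0]
    vy = vertex_xy[1] - object_xy[1]
    # base vector of the vertical axis: unit vector of the moving direction
    n = math.hypot(move_dir_xy[0], move_dir_xy[1])
    uy = (move_dir_xy[0] / n, move_dir_xy[1] / n)
    # base vector of the horizontal axis: moving direction rotated -90 degrees
    ux = (uy[1], -uy[0])
    # vector decomposition: project the navigated object vector onto both bases
    return (vx * ux[0] + vy * ux[1], vx * uy[0] + vy * uy[1])
```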
In one implementation manner of the present disclosure, before obtaining the navigated object distance information corresponding to the target pixel, the method further includes:
acquiring road surface element category information corresponding to the triangular mesh vertices, wherein the road surface element category information is used for indicating the category of the road surface element corresponding to the triangular mesh where each triangular mesh vertex is located;
acquiring the navigated object distance information corresponding to the target pixel according to the pixel coordinates includes:
in response to determining, according to the road surface element category information, that the road surface element corresponding to the triangular mesh where the target pixel is located is a lane element, acquiring the navigated object distance information corresponding to the target pixel according to the pixel coordinates.
In one embodiment of the present disclosure, the road surface element category information may be understood as merely indicating whether a road surface element is a lane element, that is, a lane marker, or as indicating the specific category of the road surface element, where the categories of road surface elements may include lane markers, pedestrians, vehicles, trees, buildings, traffic lights, road signs, and the like.
In one embodiment of the present disclosure, acquiring the road surface element category information corresponding to the triangular mesh vertices may be understood as reading previously stored road surface element category information corresponding to the triangular mesh vertices, or as acquiring it from another device or system.
In one embodiment of the present disclosure, in response to determining that the road surface element corresponding to the triangular mesh where the target pixel is located is a lane element according to the road surface element category information, the pixel shader may obtain the navigated object distance information corresponding to the target pixel according to the pixel coordinates.
In the technical scheme of the present disclosure, road surface element category information corresponding to the triangular mesh vertices is acquired, the road surface element category information indicating the category of the road surface element corresponding to the triangular mesh where each vertex is located; and, in response to determining according to the road surface element category information that the road surface element corresponding to the triangular mesh where the target pixel is located is a lane element, the navigated object distance information corresponding to the target pixel is acquired according to the pixel coordinates. In this way, the target pixels for which transparency is subsequently acquired can be limited to pixels whose corresponding road surface element is a lane element, which ensures that only the content corresponding to lane elements is made transparent in the rendered navigation image, does not affect the presentation of other elements in the navigation image, and better guarantees the user experience.
In one implementation manner of the present disclosure, before acquiring the navigated object distance information corresponding to the target pixel, the method further includes:
acquiring positioning data indication information;
acquiring the distance information of the navigated object corresponding to the target pixel according to the pixel coordinates, comprising:
in response to determining that at least one of the vertex coordinates, the position coordinates, and the movement direction indicating information is acquired from the non-lane-level positioning data according to the positioning data indicating information, acquiring navigated object distance information corresponding to the target pixel according to the pixel coordinates.
In an embodiment of the present disclosure, acquiring the positioning data indication information may be understood as reading the positioning data indication information stored in advance, or may be understood as acquiring the positioning data indication information from another device or system.
In one embodiment of the present disclosure, the navigated object distance information corresponding to the target pixel may be acquired according to the pixel coordinates in response to determining, according to the positioning data indication information, that at least one of the vertex coordinates, the position coordinates, and the moving direction indication information was obtained from non-lane-level positioning data.
According to the technical scheme of the present disclosure, positioning data indication information is acquired, and the navigated object distance information corresponding to the target pixel is acquired according to the pixel coordinates in response to determining, according to the positioning data indication information, that at least one of the vertex coordinates, the position coordinates, and the moving direction indication information was obtained from non-lane-level positioning data. The lane elements in the navigation image are therefore made transparent only when the reliability of at least one of the vertex coordinates, the position coordinates, and the moving direction indication information cannot support presenting the relative position relationship between the navigated object and the lane elements in the navigation image, which ensures that any relative position relationship the user does learn from the navigation image is relatively accurate, and improves the user experience.
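Putting the two gates together: fading is only worth computing for lane-element pixels, and only when some input came from non-lane-level positioning data. A sketch with illustrative names (the category label and flag below are assumptions, not identifiers from the patent):

```python
LANE_ELEMENT = "lane_element"  # hypothetical category label

def should_fade(category, used_non_lane_level_data):
    """Decide whether navigated object distance information should be
    computed for a pixel: its triangular mesh must display a lane
    element, and the positioning data indication information must show
    that at least one input came from non-lane-level positioning data.
    """
    return category == LANE_ELEMENT and used_non_lane_level_data
```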
Fig. 4 shows a flow diagram of a navigation image rendering method according to an embodiment of the present disclosure. As shown in fig. 4, the navigation image rendering method includes the steps of:
In step S401, the vertex coordinates of the triangular mesh vertices corresponding to at least one road surface element, the position coordinates of the navigated object, and the road surface element category information corresponding to the triangular mesh vertices are acquired according to the rendering data.
In step S402, the moving direction indication information indicating the moving direction of the navigated object is acquired from the positioning data.
In step S403, the vertex coordinates, the road surface element category information, the position coordinates, and the moving direction indication information are input into a vertex shader, and the navigated coordinate system is established by the vertex shader.
In step S404, the vertex shader acquires the navigated object vector from the position of the navigated object to the vertex of the corresponding triangular mesh according to the vertex coordinates and the position coordinates, performs vector decomposition according to the navigated object vector and the movement direction indication information to acquire base vectors corresponding to the horizontal axis and the vertical axis in the navigated coordinate system, and acquires the target vertex coordinates according to the base vectors.
In step S405, the vertex shader transmits the road surface element class information and the target vertex coordinates to the pixel shader.
In step S406, the positioning data indication information is acquired by the pixel shader.
In step S407, the pixel shader determines, according to the positioning data indication information, whether at least one of the vertex coordinates, the position coordinates, and the moving direction indication information was acquired according to non-lane-level positioning data.
In step S408, in response to determining, according to the positioning data indication information, that at least one of the vertex coordinates, the position coordinates, and the moving direction indication information was acquired according to non-lane-level positioning data, the pixel shader determines, according to the road surface element category information, whether the road surface element corresponding to the triangular mesh where the target pixel is located is a lane element.
In step S409, in response to determining, according to the road surface element category information, that the road surface element corresponding to the triangular mesh where the target pixel is located is a lane element, the pixel coordinates of the pixels in the corresponding triangular mesh are acquired according to the target vertex coordinates, and the navigated object distance information corresponding to the target pixel is acquired according to the pixel coordinates.
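The decision flow of steps S407 to S409 can be summarized with the sketch below; the two boolean flags are illustrative stand-ins for what the positioning data indication information and the road surface element category information encode:

```python
def needs_distance_info(positioning_is_lane_level: bool,
                        pixel_is_lane_element: bool) -> bool:
    """Return True when navigated object distance information should be
    computed for the target pixel (a reading of steps S407-S409)."""
    if positioning_is_lane_level:
        # Every input came from lane-level positioning data, so the relative
        # position shown in the image is reliable; no transparency processing.
        return False
    if not pixel_is_lane_element:
        # Only lane elements are faded; other road surface elements keep
        # their ordinary rendering.
        return False
    return True
```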
The navigated object distance information includes the square of the semi-minor axis length of the target ellipse, where the center of the target ellipse coincides with the position of the navigated object, the semi-major axis of the target ellipse coincides with the moving direction of the navigated object, and the target ellipse passes through the target pixel.
When the target pixel is located in the first quadrant of the navigated coordinate system, is located in the second quadrant of the navigated coordinate system, coincides with the horizontal axis of the navigated coordinate system, coincides with the positive semi-axis of the vertical axis of the navigated coordinate system, or coincides with the origin of the navigated coordinate system, the eccentricity of the target ellipse is a first eccentricity;
when the target pixel is located in the third quadrant of the navigated coordinate system, is located in the fourth quadrant of the navigated coordinate system, or coincides with the negative semi-axis of the vertical axis of the navigated coordinate system, the eccentricity of the target ellipse is a second eccentricity, and the second eccentricity is smaller than the first eccentricity;
the navigated coordinate system is a plane rectangular coordinate system, the origin of the navigated coordinate system coincides with the position of the navigated object, and the positive semi-axis of the vertical axis of the navigated coordinate system coincides with the moving direction of the navigated object.
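Under the standard ellipse equation this construction has a closed form, which explains why the square of the semi-minor axis length can serve directly as distance information (this derivation is ours; the patent states the construction but not the formula). With the target pixel at coordinates $(x, y)$ in the navigated coordinate system and the semi-major axis on the vertical axis, the target ellipse satisfies $x^2/b^2 + y^2/a^2 = 1$, and its eccentricity $e$ satisfies $e^2 = 1 - b^2/a^2$, i.e. $a^2 = b^2/(1 - e^2)$. Substituting gives

$$b^2 = x^2 + (1 - e^2)\,y^2,$$

an anisotropic squared distance that discounts offsets along the moving direction. Because the second eccentricity behind the navigated object is smaller, $1 - e^2$ is larger there, so the transparent region extends farther ahead of the navigated object than behind it.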
In step S410, a linear transformation is performed on the square of the semi-minor axis length of the target ellipse in the navigated object distance information to obtain the transparency of the target pixel.
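A minimal sketch of step S410, combined with the formula above; the eccentricity values, the clamp bound b2_opaque, and the exact mapping are assumptions, since the patent specifies only that the transformation is linear and that transparency is negatively correlated with the target distance:

```python
def pixel_transparency(x: float, y: float,
                       e_front: float = 0.9, e_back: float = 0.6,
                       b2_opaque: float = 25.0) -> float:
    """Transparency of a lane-element pixel at (x, y) in the navigated
    coordinate system: 1.0 = fully transparent, 0.0 = fully opaque.
    All numeric constants are made-up placeholder values."""
    e = e_back if y < 0.0 else e_front      # second (smaller) eccentricity behind the object
    b2 = x * x + (1.0 - e * e) * y * y      # squared semi-minor axis of the target ellipse
    t = min(max(b2 / b2_opaque, 0.0), 1.0)  # linear transform, clamped to [0, 1]
    return 1.0 - t                          # closer pixels are more transparent

# In an actual pixel shader this value would presumably be written to the
# alpha channel of the lane element's fragment color (step S411).
```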
In step S411, the navigation image is rendered according to the transparency of the target pixel.
Fig. 5 illustrates a block diagram of a navigation image rendering apparatus according to an embodiment of the present disclosure. The apparatus may be implemented as part or all of an electronic device through software, hardware, or a combination of both.
As shown in fig. 5, the navigation image rendering apparatus 500 includes:
a distance obtaining module 501 configured to obtain navigated object distance information corresponding to a target pixel, the target pixel corresponding to a target lane element, the navigated object distance information indicating a target distance between the target pixel and the navigated object;
a transparency obtaining module 502 configured to acquire the transparency of the target pixel according to the navigated object distance information, the transparency of the target pixel being negatively correlated with the target distance;
an image rendering module 503 configured to render the navigation image according to the transparency of the target pixel.
In the technical solution of the present disclosure, the navigated object distance information corresponding to the target pixel, that is, information indicating the target distance between the target pixel corresponding to the target lane element and the navigated object, is acquired; the transparency of the target pixel is acquired according to the navigated object distance information such that the transparency is negatively correlated with the target distance; and the navigation image is rendered according to the transparency of the target pixel. In the rendered navigation image, the closer a lane element is to the navigated object, the higher the transparency of the pixels displaying that lane element, and the farther a lane element is from the navigated object, the lower the transparency of those pixels, so that the user is not led to estimate the relative position between the navigated object and the lane elements from the lane elements closest to the navigated object in the navigation image.
The present disclosure also discloses an electronic device, and fig. 6 shows a block diagram of the electronic device according to an embodiment of the present disclosure.
As shown in fig. 6, the electronic device includes a memory and a processor, wherein the memory is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the methods according to the embodiments of the present disclosure.
The embodiment of the disclosure provides a navigation image rendering method, which comprises the following steps:
acquiring navigated object distance information corresponding to a target pixel, wherein the target pixel corresponds to a target lane element, and the navigated object distance information is used for indicating a target distance between the target pixel and the navigated object;
acquiring the transparency of the target pixel according to the navigated object distance information, wherein the transparency of the target pixel is negatively correlated with the target distance;
and rendering the navigation image according to the transparency of the target pixel.
In one implementation of the present disclosure, the navigated object distance information includes the square of the semi-minor axis length of the target ellipse, the center of the target ellipse coincides with the position of the navigated object, the semi-major axis of the target ellipse coincides with the moving direction of the navigated object, and the target ellipse passes through the target pixel.
In one implementation of the present disclosure, when the target pixel is located in the first quadrant of the navigated coordinate system, is located in the second quadrant of the navigated coordinate system, coincides with the horizontal axis of the navigated coordinate system, coincides with the positive semi-axis of the vertical axis of the navigated coordinate system, or coincides with the origin of the navigated coordinate system, the eccentricity of the target ellipse is a first eccentricity;
when the target pixel is located in the third quadrant of the navigated coordinate system, is located in the fourth quadrant of the navigated coordinate system, or coincides with the negative semi-axis of the vertical axis of the navigated coordinate system, the eccentricity of the target ellipse is a second eccentricity, and the second eccentricity is smaller than the first eccentricity;
the navigated coordinate system is a plane rectangular coordinate system, the origin of the navigated coordinate system coincides with the position of the navigated object, and the positive semi-axis of the vertical axis of the navigated coordinate system coincides with the moving direction of the navigated object.
In one implementation manner of the present disclosure, before acquiring the navigated object distance information corresponding to the target pixel, the method further includes:
acquiring vertex coordinates of a triangular mesh vertex corresponding to at least one road surface element, position coordinates of the navigated object, and moving direction indication information for indicating the moving direction of the navigated object;
acquiring target vertex coordinates of the corresponding triangular mesh vertex in the navigated coordinate system according to the vertex coordinates, the position coordinates and the moving direction indication information;
acquiring pixel coordinates of pixels in the corresponding triangular mesh according to the target vertex coordinates;
acquiring navigated object distance information corresponding to a target pixel, comprising:
and acquiring the distance information of the navigated object corresponding to the target pixel according to the pixel coordinates.
In one implementation manner of the present disclosure, obtaining target vertex coordinates of a corresponding triangular mesh vertex in a navigated coordinate system according to the vertex coordinates, the position coordinates, and the moving direction indication information includes:
acquiring a navigated object vector from the position of the navigated object to the vertex of the corresponding triangular mesh according to the vertex coordinates and the position coordinates;
performing vector decomposition according to the navigated object vector and the moving direction indication information to obtain base vectors corresponding to the horizontal axis and the vertical axis in the navigated coordinate system;
and acquiring the target vertex coordinates according to the base vectors.
In one implementation manner of the present disclosure, before obtaining the navigated object distance information corresponding to the target pixel, the method further includes:
acquiring road surface element category information corresponding to the triangular mesh vertices, wherein the road surface element category information is used for indicating the category of the road surface element corresponding to the triangular mesh where each triangular mesh vertex is located;
acquiring the distance information of the navigated object corresponding to the target pixel according to the pixel coordinates, comprising:
and in response to determining, according to the road surface element category information, that the road surface element corresponding to the triangular mesh where the target pixel is located is a lane element, acquiring the navigated object distance information corresponding to the target pixel according to the pixel coordinates.
In one implementation manner of the present disclosure, before acquiring the navigated object distance information corresponding to the target pixel, the method further includes:
acquiring positioning data indication information;
acquiring the distance information of the navigated object corresponding to the target pixel according to the pixel coordinates, comprising:
in response to determining, according to the positioning data indication information, that at least one of the vertex coordinates, the position coordinates, and the moving direction indication information is acquired according to non-lane-level positioning data, acquiring the navigated object distance information corresponding to the target pixel according to the pixel coordinates.
FIG. 7 shows a schematic block diagram of a computer system suitable for use in implementing a method according to an embodiment of the present disclosure.
As shown in fig. 7, the computer system includes a processing unit that can execute the various methods in the above-described embodiments according to a program stored in a Read Only Memory (ROM) or a program loaded from a storage section into a Random Access Memory (RAM). In the RAM, various programs and data necessary for the operation of the computer system are also stored. The processing unit, the ROM, and the RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
The following components are connected to the I/O interface: an input section including a keyboard, a mouse, and the like; an output section including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section including a hard disk and the like; and a communication section including a network interface card such as a LAN card, a modem, or the like. The communication section performs a communication process via a network such as the internet. The drive is also connected to the I/O interface as needed. A removable medium such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive as necessary, so that a computer program read out therefrom is mounted into the storage section as necessary. The processing unit can be realized as a CPU, a GPU, a TPU, an FPGA, an NPU and other processing units.
In particular, the above described methods may be implemented as computer software programs according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the above-described method. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or by programmable hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation on the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be a computer-readable storage medium included in the electronic device or the computer system in the above embodiments; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
The foregoing description is only exemplary of the preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the present disclosure is not limited to technical solutions formed by the specific combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions in which the above features are replaced with (but not limited to) features having similar functions disclosed in the present disclosure.

Claims (10)

1. A navigation image rendering method, comprising:
acquiring navigated object distance information corresponding to a target pixel, wherein the target pixel corresponds to a target lane element, and the navigated object distance information is used for indicating a target distance between the target pixel and the navigated object;
acquiring the transparency of the target pixel according to the distance information of the navigated object, wherein the transparency of the target pixel is in negative correlation with the target distance;
and rendering a navigation image according to the transparency of the target pixel.
2. The navigation image rendering method according to claim 1, wherein the navigated object distance information includes a square of a semi-minor axis length of a target ellipse, a center of the target ellipse coincides with the position of the navigated object, a semi-major axis of the target ellipse coincides with the direction of movement of the navigated object, and the target ellipse passes through the target pixel.
3. The navigation image rendering method according to claim 2, wherein the eccentricity of the target ellipse is a first eccentricity when the target pixel is located in a first quadrant of a navigated coordinate system, the target pixel is located in a second quadrant of the navigated coordinate system, the target pixel coincides with a horizontal axis of the navigated coordinate system, the target pixel coincides with a positive half axis of a vertical axis of the navigated coordinate system, or the target pixel coincides with an origin of the navigated coordinate system;
when the target pixel is located in a third quadrant of the navigated coordinate system, the target pixel is located in a fourth quadrant of the navigated coordinate system, or the target pixel coincides with a negative semi-axis of the vertical axis of the navigated coordinate system, the eccentricity of the target ellipse is a second eccentricity that is less than the first eccentricity;
the navigated coordinate system is a plane rectangular coordinate system, the origin of the navigated coordinate system coincides with the position of the navigated object, and the positive semi-axis of the vertical axis of the navigated coordinate system coincides with the moving direction of the navigated object.
4. The navigation image rendering method according to any one of claims 1 to 3, wherein before the acquiring the navigated object distance information corresponding to the target pixel, the method further comprises:
acquiring vertex coordinates of a triangular mesh vertex corresponding to at least one road surface element, position coordinates of the navigated object and moving direction indication information for indicating the moving direction of the navigated object;
acquiring a target vertex coordinate of a corresponding triangular mesh vertex in the navigated coordinate system according to the vertex coordinate, the position coordinate and the moving direction indication information;
acquiring pixel coordinates of pixels in the corresponding triangular mesh according to the target vertex coordinates;
the acquiring the navigated object distance information corresponding to the target pixel comprises:
and acquiring the distance information of the navigated object corresponding to the target pixel according to the pixel coordinates.
5. The navigation image rendering method according to claim 4, wherein the obtaining of the target vertex coordinates of the corresponding triangular mesh vertex in the navigated coordinate system according to the vertex coordinates, the position coordinates, and the movement direction indication information comprises:
acquiring a navigated object vector from the position of the navigated object to the vertex of the corresponding triangular mesh according to the vertex coordinates and the position coordinates;
performing vector decomposition according to the navigated object vector and the moving direction indication information to obtain base vectors corresponding to a horizontal axis and a vertical axis in the navigated coordinate system;
and acquiring the target vertex coordinates according to the base vectors.
6. The navigation image rendering method according to claim 4, wherein before the acquiring the navigated object distance information corresponding to the target pixel, the method further comprises:
acquiring road surface element category information corresponding to the triangular mesh vertices, wherein the road surface element category information is used for indicating the category of the road surface element corresponding to the triangular mesh where each triangular mesh vertex is located;
the obtaining of the distance information of the navigated object corresponding to the target pixel according to the pixel coordinates includes:
and in response to determining, according to the road surface element category information, that the road surface element corresponding to the triangular mesh where the target pixel is located is a lane element, acquiring the navigated object distance information corresponding to the target pixel according to the pixel coordinates.
7. The navigation image rendering method according to claim 4, wherein before the acquiring the navigated object distance information corresponding to the target pixel, the method further comprises:
acquiring positioning data indication information;
the obtaining of the distance information of the navigated object corresponding to the target pixel according to the pixel coordinates includes:
in response to determining, according to the positioning data indication information, that at least one of the vertex coordinates, the position coordinates, and the moving direction indication information is acquired according to non-lane-level positioning data, acquiring the navigated object distance information corresponding to the target pixel according to the pixel coordinates.
8. A navigation image rendering apparatus, comprising:
a distance acquisition module configured to acquire navigated object distance information corresponding to a target pixel, the target pixel corresponding to a target lane element, the navigated object distance information indicating a target distance between the target pixel and a navigated object;
a transparency obtaining module configured to obtain the transparency of the target pixel according to the navigated object distance information, the transparency of the target pixel being negatively correlated with the target distance;
an image rendering module configured to render a navigation image according to the transparency of the target pixel.
9. An electronic device, comprising a memory and a processor; the memory for storing one or more computer instructions for execution by the processor to perform the method steps of any of claims 1-7.
10. A computer readable storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a processor, implement the method steps of any one of claims 1-7.
CN202211085804.7A 2022-09-06 2022-09-06 Navigation image rendering method and device, electronic equipment and readable storage medium Pending CN115326088A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211085804.7A CN115326088A (en) 2022-09-06 2022-09-06 Navigation image rendering method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211085804.7A CN115326088A (en) 2022-09-06 2022-09-06 Navigation image rendering method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115326088A true CN115326088A (en) 2022-11-11

Family

ID=83929975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211085804.7A Pending CN115326088A (en) 2022-09-06 2022-09-06 Navigation image rendering method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115326088A (en)

Similar Documents

Publication Publication Date Title
JP6580800B2 (en) Accelerated light field display
US10282915B1 (en) Superimposition device of virtual guiding indication and reality image and the superimposition method thereof
CN108474666B (en) System and method for locating a user in a map display
US10347046B2 (en) Augmented reality transportation notification system
US8773534B2 (en) Image processing apparatus, medium recording image processing program, and image processing method
US9135754B2 (en) Method to generate virtual display surfaces from video imagery of road based scenery
US8970583B1 (en) Image space stylization of level of detail artifacts in a real-time rendering engine
CN109961522B (en) Image projection method, device, equipment and storage medium
US20140285523A1 (en) Method for Integrating Virtual Object into Vehicle Displays
US20140375638A1 (en) Map display device
CN110047105B (en) Information processing apparatus, information processing method, and storage medium
JPH1165431A (en) Device and system for car navigation with scenery label
US11704883B2 (en) Methods and systems for reprojection in augmented-reality displays
CN112258519B (en) Automatic extraction method and device for way-giving line of road in high-precision map making
CN101122464A (en) GPS navigation system road display method, device and apparatus
EP3811326B1 (en) Heads up display (hud) content control system and methodologies
US9846819B2 (en) Map image display device, navigation device, and map image display method
US20130120373A1 (en) Object distribution range setting device and object distribution range setting method
CN109115238B (en) Map display method, device and equipment
CN115326088A (en) Navigation image rendering method and device, electronic equipment and readable storage medium
CN114689063A (en) Map modeling and navigation guiding method, electronic device and computer program product
CN113536854A (en) High-precision map guideboard generation method and device and server
US11636658B1 (en) Dynamic augmented reality overlay display
CN115097628B (en) Driving information display method, device and system
CN115665400B (en) Augmented reality head-up display imaging method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination