CN115406462A - Navigation and live-action fusion method and device, electronic equipment and storage medium


Info

Publication number
CN115406462A
Authority
CN
China
Prior art keywords
navigation
vehicle
path
current position
live
Prior art date
Legal status
Pending
Application number
CN202211055929.5A
Other languages
Chinese (zh)
Inventor
周金澄
陈斌
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd
Priority to CN202211055929.5A
Publication of CN115406462A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3635 Guidance using 3D or perspective road maps
    • G01C21/3638 Guidance using 3D or perspective road maps including 3D objects and buildings
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The invention provides a navigation and live-action fusion method and device, electronic equipment and a storage medium, wherein the navigation and live-action fusion method comprises the following steps: acquiring the current position and the driving direction of a vehicle; determining navigation data based on the current position and the driving direction of the vehicle; determining a navigation path according to the navigation data, and calculating the position coordinates of the navigation path in a vehicle body coordinate system; generating a navigation guide image according to the current position of the vehicle, the eye movement track of the driver in the vehicle and the position coordinates of the navigation path; and projecting the navigation guide image onto the vehicle windshield to realize navigation and live-action fusion. By displaying virtual image information in front of the automobile, the invention enables the driver to observe the navigation information and the actual road conditions at the same time, solves the problem of large fusion deviation between the navigation information and the actual road conditions, and improves the degree of fit between the navigation information and the actual road conditions.

Description

Navigation and live-action fusion method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of navigation technologies, and in particular, to a navigation and live-action fusion method and apparatus, an electronic device, and a computer-readable storage medium.
Background
An Augmented Reality Head-Up Display (AR-HUD) is a virtual display system that uses the windshield of an automobile as a projection medium and relies on the principle of optical reflection to project navigation information into the driver's line-of-sight area. Because the virtual image information is displayed in front of the automobile, the driver can take in the information without shifting their gaze, which reduces how often the driver looks down at the instrument panel and avoids interruptions of attention.
However, when navigation information is displayed with an AR-HUD in the prior art, the navigation information and the actual road exhibit a large fusion deviation or do not fit the actual road conditions, and routes with varying gradient and rotation degree cannot be calculated, so the driver cannot obtain accurate driving-direction guidance, which affects driving safety.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, the present invention provides a navigation and live-action fusion method and apparatus, an electronic device, and a storage medium, so as to solve the above-mentioned technical problems.
The navigation and live-action fusion method provided by the invention acquires the current position and driving direction of the vehicle; determines navigation data based on the current position and driving direction of the vehicle; determines a navigation path according to the navigation data and calculates the position coordinates of the navigation path in a vehicle body coordinate system; generates a navigation guide image according to the current position of the vehicle, the eye movement track of the driver in the vehicle, and the position coordinates of the navigation path; and projects the navigation guide image onto the vehicle windshield to realize navigation and live-action fusion.
In an embodiment of the present invention, acquiring the current position and the driving direction of the vehicle includes: acquiring positioning data of the vehicle to determine the current position of the vehicle and its driving direction; and obtaining the position of the vehicle at each moment according to the refresh frequency of the positioning data and real-time vehicle speed information.
In an embodiment of the present invention, determining navigation data based on the current position and the driving direction of the vehicle includes: acquiring front road section data corresponding to the current position of the vehicle from a map, and determining a road section course angle at the shape points of the road section in front of the vehicle, wherein the front road section data comprises a slope and a rotation degree; and acquiring the vehicle course angle of the vehicle according to the current position of the vehicle and the turning angle of the steering wheel.
In an embodiment of the present invention, determining a navigation path according to the navigation data, and calculating position coordinates of the navigation path in a vehicle body coordinate system includes: converting the coordinates of the path planning discrete points of the front road section corresponding to the current position of the vehicle into path points; screening the path points to leave effective path points; forming a smooth curve according to the positions of the effective path points and the distances between the adjacent path points; after the effective path points are fitted into a plurality of smooth curve segments, connecting the smooth curve segments to form a path planning curve; and mapping the current position of the vehicle to the path planning curve.
In an embodiment of the present invention, generating the navigation guidance image according to the current position of the vehicle, the track of movement of human eyes of the driver in the vehicle, and the position coordinates of the navigation path includes: collecting human eye motion images through a human eye detection camera; judging three-dimensional coordinates of human eyes according to a human eye motion detection software algorithm to generate a human eye motion track; and generating a navigation guide image by combining the current position of the vehicle, the motion track of the human eyes and the position coordinates of the navigation path.
In an embodiment of the present invention, projecting the navigation guidance image to a windshield of a vehicle to achieve navigation and live-action fusion includes: transmitting the navigation guidance image to an AR-HUD display device; the AR-HUD display device projecting light onto the front windshield of the vehicle to form a HUD light path; and the virtual image produced by the HUD light path being projected onto the vehicle windshield and fused with the actual road.
The invention provides a navigation and live-action fusion device, which comprises: the positioning module is used for acquiring the current position and the running direction of the vehicle; the navigation map module is used for acquiring navigation data based on the current position and the running direction of the vehicle; the image processing module is used for calculating the position coordinates of the navigation path under the vehicle body coordinates according to the navigation data; the image fusion module is used for generating a navigation guide image according to the current position of the vehicle, the human eye movement track of a driver in the vehicle and the position coordinates of the navigation path; and the projection module is used for projecting the navigation guide image to a vehicle windshield so as to realize navigation and real scene fusion.
In an embodiment of the present invention, the positioning module, the navigation map module, the image processing module, the image fusion module and the projection module are communicatively connected.
The invention provides an electronic device, comprising: one or more processors; a storage device configured to store one or more programs that, when executed by the one or more processors, cause the electronic device to implement any of the navigation and reality fusion methods described above.
The invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor of a computer, causes the computer to execute any one of the navigation and live-action fusion methods described above.
The invention has the following beneficial effects. In the navigation and live-action fusion method provided by the invention, a map module and a positioning module are used to accurately acquire the road information ahead of the vehicle and the vehicle information in order to generate a navigation path; a navigation guide image is generated by combining the current position of the vehicle with the motion track of the driver's eyes; and the navigation guide image is projected onto the windshield. This realizes the fusion of navigation and the live action, reduces how often the driver looks down at the instrument panel, solves the problem of large fusion deviation between the navigation information and the actual road, and improves driving safety.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a schematic diagram of an implementation environment for navigation and live-action fusion according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart illustrating a navigation and live action fusion method according to an exemplary embodiment of the present application;
FIG. 3 is a flow chart of step S210 in the embodiment shown in FIG. 2 in an exemplary embodiment;
FIG. 4 is a flow chart of step S220 in the embodiment shown in FIG. 2 in an exemplary embodiment;
FIG. 5 is a flow chart of step S230 in the embodiment shown in FIG. 2 in an exemplary embodiment;
FIG. 6 is a flow chart of step S240 in the embodiment shown in FIG. 2 in an exemplary embodiment;
FIG. 7 is a flow chart of step S250 in the embodiment shown in FIG. 2 in an exemplary embodiment;
FIG. 8 is a block diagram of a navigation and live action fusion apparatus according to an exemplary embodiment of the present application;
FIG. 9 is a schematic structural diagram of a navigation and scene fusion apparatus according to an exemplary embodiment of the present application;
FIG. 10 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Other advantages and effects of the present invention will become apparent to those skilled in the art from the disclosure herein, wherein the embodiments of the present invention are described in detail with reference to the accompanying drawings and preferred embodiments. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be understood that the preferred embodiments are illustrative of the invention only and are not limiting upon the scope of the invention.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of embodiments of the present invention, however, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details, and in other embodiments, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present invention.
It should be noted that navigation is a technology for guiding movement along a path to a destination, that is, a process of monitoring and controlling the movement of an object such as a vehicle or a pedestrian from one place to another. Navigation is generally divided into four fields, namely land navigation, marine navigation, aviation navigation and space navigation. The embodiments of the present application relate to land navigation and are used to monitor the change in position of a vehicle moving from one place to another along a planned path.
During navigation, road condition information can be refreshed at a specific frequency to ensure a good user experience. Taking vehicle driving-path navigation as an example, because the vehicle travels quickly, the road conditions of the road being driven can be refreshed at the second level. For example, the remaining time to the destination is usually displayed with the second as the minimum unit, so the road conditions can be refreshed every 1 second during navigation, allowing the driver to see the latest road conditions in a timely, synchronized manner. In other application scenarios, the refresh frequency of the road conditions may be set according to the actual situation, which is not limited in the embodiments of the present application.
Fig. 1 is a schematic diagram of an implementation environment of a navigation process according to an exemplary embodiment of the present application. As shown in fig. 1, while the vehicle is driving, navigation is implemented through navigation map software installed on the intelligent terminal 110. The navigation map software refreshes road conditions at the level of seconds; that is, every second it makes a network request to the navigation server 120 according to the server's domain name, and the navigation server 120 returns the corresponding road condition data, which contains the remaining time until the vehicle reaches the destination. The navigation map software parses the road condition data to obtain the remaining time and packages and displays it on the navigation interface.
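A minimal sketch of this polling loop is given below; the endpoint, field names and HTTP client choice are placeholders for illustration only, not details disclosed by the application.

```python
import time
import requests  # assumed HTTP client; endpoint and fields below are hypothetical

NAV_SERVER = "https://nav.example.com/traffic"   # placeholder domain, not from the patent

def refresh_loop(route_id, interval_s=1.0):
    """Poll the navigation server once per second and show the remaining time."""
    while True:
        resp = requests.get(NAV_SERVER, params={"route": route_id}, timeout=0.8)
        traffic = resp.json()                     # road condition data for the route
        remaining_s = traffic.get("remaining_time_s")
        print(f"time to destination: {remaining_s} s")
        time.sleep(interval_s)                    # second-level refresh interval
```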
The intelligent terminal 110 shown in fig. 1 may be any terminal device supporting installation of navigation map software, such as a smart phone, a vehicle-mounted computer, a tablet computer, a notebook computer, or a wearable device, but is not limited thereto. The navigation server 120 shown in fig. 1 is a navigation server, and may be, for example, an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content Delivery Network), and a big data and artificial intelligence platform, which is not limited herein. The intelligent terminal 110 may communicate with the navigation server 120 through a wireless network such as 3G (third generation mobile information technology), 4G (fourth generation mobile information technology), 5G (fifth generation mobile information technology), and the like, which is not limited herein.
Fig. 2 is a flowchart illustrating a navigation and live-action fusion method according to an exemplary embodiment of the present application. The method may be applied to the implementation environment shown in fig. 1 and specifically executed by the intelligent terminal 110 in the implementation environment. It should be understood that the method may be applied to other exemplary implementation environments and is specifically executed by devices in other implementation environments, and the embodiment does not limit the implementation environment to which the method is applied.
As shown in fig. 2, in an exemplary embodiment, the navigation and live-action fusion method at least includes steps S210 to S250, which are described in detail as follows:
step S210, a current position and a driving direction of the vehicle are acquired.
The current position of the vehicle on the navigation path may be obtained by a Positioning module installed in the navigation vehicle, for example, the real-time position of the navigation vehicle is obtained by a Global Positioning System (GPS) Positioning module installed in the navigation vehicle, or the real-time position of the navigation vehicle may be obtained by a Positioning module that uses other Positioning technologies to realize Positioning, such as a Location Based Service (LBS), and the like, which is not limited herein.
Step S220, determining navigation data based on the current position and the driving direction of the vehicle.
This step acquires the navigation data for the road in front of the vehicle so that it can be processed in the next step. The navigation data are acquired from a high-precision map and are used to solve the problem in the traditional technology that the fusion deviation between the indication information and the actual road is large, or that the indication information does not fit the actual road conditions. The navigation data further include the absolute coordinates of road surface elements that the vehicle passes on its way from the starting position to the target position, where the road surface elements include, but are not limited to, street lamps on the roadside and the longitude and latitude coordinates of each point on the lane lines on the road surface.
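Purely as an illustration of how such navigation data might be held in memory (every field name below is an assumption of this sketch, not the patent's own data layout):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical containers for the HD-map data described above; names are illustrative.
@dataclass
class RoadShapePoint:
    lon: float          # longitude of the shape point
    lat: float          # latitude of the shape point
    altitude: float     # height, used to derive the slope of the segment
    heading_deg: float  # road section course angle at this shape point

@dataclass
class NavigationData:
    shape_points: List[RoadShapePoint] = field(default_factory=list)
    # absolute (lon, lat) coordinates of roadside elements such as street lamps
    street_lamps: List[Tuple[float, float]] = field(default_factory=list)
    # absolute (lon, lat) coordinates sampled along each lane line
    lane_lines: List[List[Tuple[float, float]]] = field(default_factory=list)
```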
And step S230, determining a navigation path according to the navigation data, and calculating the position coordinates of the navigation path in a vehicle body coordinate system.
The navigation path is a route between a navigation starting point and a navigation end point, and is obtained by planning the navigation path according to the navigation starting point and the navigation end point.
And step S240, generating a navigation guide image according to the current position of the vehicle, the human eye movement track of the driver in the vehicle and the position coordinates of the navigation path.
In this embodiment, the driver's eyes move continuously while the vehicle is driving; as the driver's line of sight changes, the images formed on the driver's eyeballs also change. The driver's eyeballs are therefore tracked, eyeball images are captured, the driver's gaze direction is identified from those images, and the driver's eye movement track is generated. The projection of the navigation picture is then continuously adjusted according to the eye movement track to present the driver with an optimal real-time picture; for example, a human eye detection camera can be arranged inside the vehicle to track the driver's eyeballs.
And step S250, projecting the navigation guidance image to a vehicle windshield so as to realize navigation and real scene fusion.
After the navigation guide image is generated, it is projected onto the windshield, fusing the navigation guide image with the actual road conditions. Because the navigation map is displayed on the front windshield, the driver can determine the driving route from the map on the windshield; this makes distraction less likely and reduces traffic accidents. In addition, the navigation guide image formed on the front windshield overlaps the virtual indication information with the objects seen through the windshield, so the driver can clearly see the driving route indicated by the guidance, mistaking the route is avoided, and the driving experience is improved.
Fig. 3 is a flow chart of step S210 in the embodiment shown in fig. 2 in an exemplary embodiment. As shown in fig. 3, the process of acquiring the current position and the driving direction of the vehicle may include steps S310 to S320, which are described in detail as follows:
step S310, acquiring high-precision positioning data of the vehicle to determine the current position of the vehicle and the running direction of the vehicle.
As described above, the real-time positioning information of the navigation vehicle may be obtained by a positioning module installed in the navigation vehicle, for example, a GPS module installed in the navigation vehicle may be used to obtain a real-time GPS coordinate of the navigation vehicle, and the GPS coordinate may be used as the real-time positioning information of the navigation vehicle.
And step S320, calculating and obtaining the vehicle position at each moment according to the positioning data refreshing frequency and the real-time vehicle speed information.
During navigation, the road condition information can be refreshed at a specific frequency and the three-dimensional coordinates of the vehicle at each moment obtained, so as to ensure a good user experience. Taking vehicle driving-path navigation as an example, because the vehicle travels quickly, the road conditions can be refreshed at the second level; for example, the remaining time to the destination is usually displayed with the second as the minimum unit, so the road conditions can be refreshed every 1 second during navigation, allowing the driver to see the latest road conditions in a timely, synchronized manner. The refresh frequency may be set according to the actual situation, which is not limited in the embodiments of the present application.
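As a hedged sketch of step S320 (the refresh interval, heading convention and flat-earth step are assumptions of the example, not taken from the patent), the vehicle position at each moment can be extrapolated from the last positioning fix and the real-time speed:

```python
import math

def extrapolate_position(lon, lat, heading_deg, speed_mps, refresh_hz=1.0):
    """Estimate the vehicle position one refresh interval after the last fix.

    A flat-earth approximation is used between refreshes; over one second of
    driving this error is negligible compared with the positioning error itself.
    """
    dt = 1.0 / refresh_hz                      # time elapsed since the last fix
    distance = speed_mps * dt                  # metres travelled at the current speed
    heading = math.radians(heading_deg)        # 0 deg = north, clockwise positive
    d_north = distance * math.cos(heading)
    d_east = distance * math.sin(heading)
    # convert the metric offsets back to degrees of latitude / longitude
    dlat = d_north / 111_320.0
    dlon = d_east / (111_320.0 * math.cos(math.radians(lat)))
    return lon + dlon, lat + dlat

# example: 60 km/h heading due east, positioning refreshed once per second
print(extrapolate_position(106.55, 29.56, 90.0, speed_mps=60 / 3.6))
```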
Fig. 4 is a flow chart of step S220 in the embodiment shown in fig. 2 in an exemplary embodiment. As shown in fig. 4, the process of determining navigation data may include steps S410 to S420, which are described in detail as follows:
step S410: and acquiring front road section data corresponding to the current position of the vehicle from the high-precision map, and determining a road section heading angle of a shape point of the road section in front of the vehicle, wherein the front road section data comprises the gradient and the rotation degree of the road, and the road section heading angle is determined by the gradient and the rotation degree of the road.
Step S420: and acquiring the vehicle heading angle of the vehicle according to the current position of the vehicle and the turning angle of the steering wheel.
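A minimal sketch of how both course angles could be computed follows. The patent does not fix the formulas; the shape points are assumed to be given in a local east-north-up frame, and the steering ratio, wheelbase, speed and time step used by the bicycle model are illustrative assumptions of this example.

```python
import math

def road_section_angles(p_prev, p_next):
    """Heading (yaw) and slope (pitch) of the road section at a shape point,
    taken from the neighbouring shape points given as (east, north, up) metres.
    One plausible reading of "determined by the gradient and rotation degree"."""
    de, dn, du = (p_next[i] - p_prev[i] for i in range(3))
    heading_deg = math.degrees(math.atan2(de, dn)) % 360.0        # 0 deg = north, clockwise
    slope_deg = math.degrees(math.atan2(du, math.hypot(de, dn)))  # positive = uphill
    return heading_deg, slope_deg

def vehicle_course_angle_deg(current_heading_deg, steering_wheel_deg, speed_mps, dt,
                             steering_ratio=16.0, wheelbase_m=2.8):
    """Propagate the vehicle course angle with a simple bicycle model;
    the steering ratio and wheelbase are illustrative defaults."""
    front_wheel_rad = math.radians(steering_wheel_deg / steering_ratio)
    yaw_rate = speed_mps * math.tan(front_wheel_rad) / wheelbase_m
    return (current_heading_deg + math.degrees(yaw_rate * dt)) % 360.0
```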
Fig. 5 is a flow chart of step S230 in the embodiment shown in fig. 2 in an exemplary embodiment. As shown in fig. 5, determining a navigation path according to the navigation data and calculating position coordinates of the navigation path in the vehicle body coordinate system may include steps S510 to S550, which are described in detail as follows:
step S510: and converting the coordinates of the path planning discrete points of the front road section corresponding to the current position of the vehicle into path points.
A plurality of discrete points is selected on the planned path, and these discrete points are converted into path points represented by three-dimensional coordinates. The number of discrete points is not limited and can be set according to the specific situation.
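As one hedged illustration of this conversion into the vehicle body coordinate system (the flat-earth approximation and the axis convention are assumptions of the sketch, not taken from the patent), a discrete point given as longitude, latitude and altitude can be turned into a path point as follows:

```python
import math

def to_body_frame(lon, lat, alt, veh_lon, veh_lat, veh_alt, veh_heading_deg):
    """Convert a map point to the vehicle body frame (x forward, y left, z up).

    A local flat-earth approximation around the vehicle is used, which is
    adequate for the few hundred metres of road covered by the guidance image."""
    dn = (lat - veh_lat) * 111_320.0                                     # metres north of the vehicle
    de = (lon - veh_lon) * 111_320.0 * math.cos(math.radians(veh_lat))   # metres east of the vehicle
    h = math.radians(veh_heading_deg)                                    # 0 deg = north, clockwise
    x = de * math.sin(h) + dn * math.cos(h)                              # forward distance
    y = dn * math.sin(h) - de * math.cos(h)                              # leftward distance
    return x, y, alt - veh_alt

# example: a lane-line point 100 m north of a vehicle that is heading north
print(to_body_frame(106.55, 29.5609, 250.0, 106.55, 29.56, 248.5, 0.0))
```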
Step S520: and screening the path points to leave effective path points.
In this embodiment, the path points are screened and overly dense points are deleted, leaving the effective path points. "Too dense" means that adjacent points are closer together than a configurable, adjustable distance threshold; removing such points reduces the amount of computation without affecting the final result.
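A minimal sketch of the screening step, assuming the path points are (x, y, z) coordinates in the vehicle body frame and that the "too dense" threshold is a distance in metres (both assumptions of this example):

```python
def screen_path_points(points, min_spacing=1.0):
    """Keep only path points that are at least min_spacing metres away from the
    previously kept point; min_spacing is the adjustable "too dense" threshold."""
    if not points:
        return []
    kept = [points[0]]
    for p in points[1:]:
        last = kept[-1]
        dist = ((p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2 + (p[2] - last[2]) ** 2) ** 0.5
        if dist >= min_spacing:
            kept.append(p)
    return kept
```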
Step S530: and forming a smooth curve according to the positions of the effective path points and the distances between the adjacent path points.
And carrying out smoothing processing according to the position of each given path point and the distance between two adjacent path points before and after the given path point so as to change the broken line segment into a smooth curve segment.
Step S540: and after the effective path points are fitted into a plurality of curve segments, connecting the curve segments to form a path planning curve.
In this embodiment, for example, every 4 route points may be connected to form a curve segment, and after a plurality of curve segments are formed, the plurality of curve segments may be connected to form a route planning curve.
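The patent does not name the interpolation scheme; as one hedged illustration, a Catmull-Rom spline over each group of four consecutive path points produces a smooth segment that passes through the points, and the segments can then be connected into the path planning curve:

```python
def catmull_rom_segment(p0, p1, p2, p3, samples=10):
    """Interpolate a smooth curve segment between p1 and p2 from four consecutive
    path points (Catmull-Rom; one of several splines that would fit here)."""
    curve = []
    for i in range(samples + 1):
        t = i / samples
        curve.append(tuple(
            0.5 * ((2 * p1[k]) + (-p0[k] + p2[k]) * t
                   + (2 * p0[k] - 5 * p1[k] + 4 * p2[k] - p3[k]) * t ** 2
                   + (-p0[k] + 3 * p1[k] - 3 * p2[k] + p3[k]) * t ** 3)
            for k in range(len(p1))))
    return curve

def build_path_curve(points):
    """Connect the per-four-point segments into one path planning curve."""
    curve = []
    for i in range(1, len(points) - 2):
        seg = catmull_rom_segment(points[i - 1], points[i], points[i + 1], points[i + 2])
        curve.extend(seg if not curve else seg[1:])   # avoid duplicating joint points
    return curve
```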
Step S550: and mapping the current position of the vehicle to a path planning curve.
Fig. 6 is a flow chart of step S240 in the embodiment shown in fig. 2 in an exemplary embodiment. As shown in fig. 6, the process of generating the navigation guidance image may include steps S610 to S630, which are described in detail as follows:
step S610: and the human eye moving images are collected through the human eye detection camera.
In this embodiment, as described above, a human eye detection camera may be disposed inside the vehicle to collect the motion trajectory of the driver's eyes. The camera can be arranged where the front windshield meets the roof inside the cabin, in front of the driver, or at any other position from which eye movement can be detected; the specific installation position of the human eye detection camera is not limited here.
Step S620: and judging the three-dimensional coordinates of the human eyes according to a human eye motion detection software algorithm to generate a human eye motion track.
After the human eye motion images are collected, the corresponding motion detection software calculates the three-dimensional coordinates of the eyes, and the corresponding human eye motion track is then generated from the obtained three-dimensional coordinates.
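The patent leaves the detection algorithm to the eye-tracking software. As one hedged illustration only, the three-dimensional eye coordinates could be estimated from the detected pupil pixels with a pinhole-camera model and a nominal interpupillary distance, and the sequence of such estimates forms the eye motion track:

```python
def eye_position_3d(left_px, right_px, focal_px, cx, cy, ipd_m=0.063):
    """Rough 3D mid-eye position in the camera frame from the two pupil pixel
    coordinates, assuming a pinhole camera and a nominal interpupillary
    distance (ipd_m). Purely illustrative; a production system would use the
    vendor's calibrated eye-tracking algorithm."""
    pixel_dist = ((left_px[0] - right_px[0]) ** 2 + (left_px[1] - right_px[1]) ** 2) ** 0.5
    z = focal_px * ipd_m / pixel_dist                 # depth from similar triangles
    u = (left_px[0] + right_px[0]) / 2 - cx
    v = (left_px[1] + right_px[1]) / 2 - cy
    return (u * z / focal_px, v * z / focal_px, z)    # (x, y, z) in metres

# appending one sample per camera frame yields the eye motion track
eye_track = []
eye_track.append(eye_position_3d((930, 520), (1010, 522), focal_px=1400, cx=960, cy=540))
```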
Step S630: and generating a navigation guide image by combining the current position of the vehicle, the motion track of the human eyes and the position coordinates of the navigation path.
In this embodiment, the current position of the vehicle can be determined by the GPS positioning and the vehicle heading angle. The navigation guidance image of the vehicle includes: an icon of a vehicle position, an arrow indicating a traveling direction, a name of an object displayed on a front windshield of the vehicle, a name of a road on which the vehicle travels, a current vehicle speed, a remaining number of kilometers, a remaining driving time, and the like. The navigation guidance image may further include an estimated arrival time derived from the system time, which is not limited in this embodiment.
Fig. 7 is a flow chart of step S250 in the embodiment shown in fig. 2 in an exemplary embodiment. As shown in fig. 7, the process of projecting the navigation guidance image to the vehicle windshield to achieve the navigation and live-action fusion may include steps S710 to S730, which are described in detail as follows:
step S710: and transmitting the navigation guide image to an AR-HUD display device.
Step S720: the AR-HUD display device projects to the front windshield of the vehicle to form a HUD light path.
Step S730: virtual images generated by the HUD light path are projected to a windshield of the vehicle and are fused with an actual road.
Determining the relative positions of the vehicle and the position of an object displayed on the front windshield of the vehicle and the corresponding indication information in the virtual navigation map in the sight direction of the driver; adjusting the position of each indication information in the virtual navigation map according to the relative position, so that the positions of the vehicle and the object displayed on the front windshield of the vehicle are overlapped with the position of the corresponding indication information in the virtual navigation map in the sight line direction of the driver; the AR navigation map is composed of an object displayed on a front windshield of the vehicle and a virtual navigation map.
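A minimal sketch of the alignment described above: the marker for a real-world object is drawn where the driver's line of sight from the eye position to that object crosses the HUD virtual image plane, so the marker and the object overlap from the driver's viewpoint. The plane parameters and coordinates in the example are illustrative calibration values, not figures from the patent.

```python
import numpy as np

def project_to_virtual_image(eye_xyz, target_xyz, plane_point, plane_normal):
    """Intersect the eye-to-target sight line with the HUD virtual image plane.
    All inputs are in the vehicle body frame."""
    eye = np.asarray(eye_xyz, dtype=float)
    target = np.asarray(target_xyz, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    ray = target - eye
    denom = ray.dot(n)
    if abs(denom) < 1e-9:
        return None                      # sight line parallel to the image plane
    t = (np.asarray(plane_point, dtype=float) - eye).dot(n) / denom
    return eye + t * ray                 # point on the virtual image plane to draw at

# example: eye behind the wheel, lane point 30 m ahead, virtual image about 7.5 m ahead
print(project_to_virtual_image((0.4, -0.3, 1.2), (30.0, 1.5, 0.0),
                               plane_point=(7.5, 0.0, 1.0), plane_normal=(1.0, 0.0, 0.0)))
```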
In some exemplary embodiments, in order to further improve the accuracy of the real-time position obtained by the simulation, a more comprehensive situation, such as a moving habit of the navigation vehicle (e.g., a driving habit of the vehicle), a moving and static state of the navigation vehicle, whether the real-time position of the navigation vehicle is uniform in the moving process, and the like, needs to be considered in the simulation process of the real-time position of the navigation vehicle, so that the real-time position of the navigation vehicle can be simulated by using a machine learning method.
Machine Learning (ML) is a multi-domain interdisciplinary field that draws on probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and other disciplines. It studies how a computer can simulate or realize human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence, the fundamental way to make computers intelligent, and is applied in every field of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
Based on the strong learning ability of machine learning, a machine learning model can be trained on a large number of historical tracks to estimate the displacement offset from the full range of characteristics of the navigation vehicle, namely its moving speed, moving direction, movement habits, and moving and stationary states, so that the estimated real-time position of the navigation vehicle is more accurate and credible. For example, the machine learning model may be a neural-network-based supervised model, such as a two-class machine learning model, which is trained on a large number of historical tracks; during training the model adjusts its parameters so that the adjusted parameters have comprehensive predictive power over the all-around features of the navigation vehicle, such as moving speed, moving direction, movement habits, and moving and stationary states.
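As a hedged sketch only (the feature layout, the regressor choice and the random training data are stand-ins, not the patent's model), a small supervised network trained on features derived from historical tracks could map them to a displacement offset that refines the extrapolated real-time position:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative only: in practice the features would be engineered from the
# historical tracks (speed, heading, habit score, moving/stationary flag) and
# the labels would be the observed displacement offsets, not random numbers.
X_train = np.random.rand(500, 4)          # [speed, heading, habit score, moving flag]
y_train = np.random.rand(500, 2)          # (east, north) displacement offset in metres

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500)
model.fit(X_train, y_train)

# At run time the predicted offset is added to the extrapolated position.
offset_east, offset_north = model.predict([[16.7, 90.0, 0.3, 1.0]])[0]
```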
Fig. 8 is a block diagram of a navigation and reality fusion apparatus according to an exemplary embodiment of the present disclosure, and fig. 9 is a schematic structural diagram of the navigation and reality fusion apparatus according to an exemplary embodiment of the present disclosure. The apparatus can be applied to the implementation environment shown in fig. 1, and is specifically configured in the intelligent terminal 110. The apparatus may also be applied to other exemplary implementation environments, and is specifically configured in other devices, and the embodiment does not limit the implementation environment to which the apparatus is applied.
As shown in fig. 8, the exemplary navigation and reality fusion apparatus includes:
the positioning module 801 is used for acquiring the current position and the driving direction of the vehicle; a navigation map module 802, configured to obtain navigation data based on the current position and the driving direction of the vehicle; the image processing module 803 is used for calculating the position coordinates of the navigation path under the vehicle body coordinates according to the navigation data; the image fusion module 804 is used for generating a navigation guide image according to the current position of the vehicle, the human eye movement track of the driver in the vehicle and the position coordinates of the navigation path; and the projection module 805 is used for projecting the navigation guidance image to a vehicle windshield so as to realize navigation and real scene fusion.
In another exemplary embodiment, the real-time positioning information of the navigation vehicle may be obtained by the vehicle-mounted positioning module 801, for example, the real-time GPS coordinates of the navigation vehicle may be obtained by the GPS module mounted on the navigation vehicle, and the GPS coordinates may be used as the real-time positioning information of the navigation vehicle.
In another exemplary embodiment, the navigation data obtained by the navigation map module 802 includes, but is not limited to, a slope and a rotation degree of a road surface and absolute coordinates of element information of a road surface passing by a vehicle from a current position to an end position, where the road element information includes longitude and latitude coordinates of a street lamp on a roadside and longitude and latitude coordinates of each point of a lane line on the road surface.
In another exemplary embodiment, the image processing module 803 may mathematically model an image from the live-action image information base that matches the surroundings of the navigation road, so as to construct the dimension information of the live-action image; it then obtains the dimension information of the navigation map for that position from the navigation map information, and fuses the two according to the dimensional relationship between the dimension information of the live-action image and that of the navigation map. The road conditions and the vehicle position can be accurately calculated from the road section course angle and the vehicle course angle, which improves the fusion between the dimension information of the live-action image and that of the navigation map.
Referring to fig. 8 and 9, in another exemplary embodiment, the image fusion module 804 may generate the navigation guidance image by combining the current position of the vehicle, the motion track of the human eyes, and the position coordinates of the navigation path. The image fusion module 804 includes a human eye detection camera 901, human eye motion detection software 902, and a control unit 903, where the human eye detection camera 901 and the human eye motion detection software 902 are used to generate a human eye motion trajectory. The control unit 903 generates an AR navigation animation by combining the position coordinates of the vehicle position, the human eye movement track and the navigation path, an imaging model can be preset in the control unit 903 or by acquiring the imaging model preset in other devices of the vehicle, the parameters of the imaging model have an association relationship with the human eye position information acquired by the in-vehicle acquisition device, and parameter calibration can be performed according to the human eye position information. The image fusion module 804 may adjust the position of each indication information in the virtual navigation map according to the sight line direction of the driver, so that the indication information overlaps with the pointed vehicle or an object displayed on the front windshield of the vehicle in the sight line direction of the driver. The AR navigation map is configured by an object displayed on a front windshield of the vehicle and the virtual guide information.
In another exemplary embodiment, the projection module 805 projects the navigation guidance image onto the vehicle windshield to achieve navigation and live-action fusion, improving the driving experience of the driver. In one example, the projection module 805 is mounted in front of the steering wheel, below the windshield. The projection module 805 includes: a curved mirror 904, a Picture Generation Unit (PGU) 905, a mirror 906 and a motor 907, wherein the curved mirror 904 is used for reflecting and projecting an image to a windshield to form a HUD light path; the PGU 905 is used for image generation; the light is reflected to the windshield by the reflector 906, so that the display definition and the stability of the invention are improved; the motor 907 is used to adjust the display height of the virtual image so as to meet the demands of different drivers.
It should be noted that the navigation and reality fusion apparatus provided in the foregoing embodiment and the navigation and reality fusion method provided in the foregoing embodiment belong to the same concept, and specific manners of operations performed by each module and unit have been described in detail in the method embodiment, and are not described herein again. In practical applications, the navigation and reality fusion apparatus provided in the above embodiment may distribute the functions through different function modules according to needs, that is, divide the internal structure of the apparatus into different function modules to complete all or part of the functions described above, which is not limited herein.
An embodiment of the present application further provides an electronic device, including: one or more processors; a storage device, configured to store one or more programs, which when executed by the one or more processors, enable the electronic device to implement the navigation and reality fusion method provided in the above embodiments.
FIG. 10 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application. It should be noted that the computer system 1000 of the electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the application scope of the embodiments of the present application.
As shown in fig. 10, the computer system 1000 includes a CPU 1001, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for system operation are also stored. The CPU 1001, ROM 1002, and RAM 1003 are connected to each other via a bus 1004. An Input/Output (I/O) interface 1005 is also connected to the bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 1008 including a hard disk and the like; and a communication section 1009 including a Network interface card such as a Local Area Network (LAN) card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. The driver 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1010 as necessary, so that a computer program read out therefrom is mounted into the storage section 1008 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication part 1009 and/or installed from the removable medium 1011. When the computer program is executed by a Central Processing Unit (CPU) 1001, various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may comprise a propagated data signal with a computer-readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
Another aspect of the present application also provides a computer-readable storage medium having stored thereon a computer program, which, when executed by a processor of a computer, causes the computer to execute the navigation and live-action fusion method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist alone without being assembled into the electronic device.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the navigation and live-action fusion method provided in the above embodiments.
The foregoing embodiments are merely illustrative of the principles of the present invention and its efficacy, and are not to be construed as limiting the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.

Claims (10)

1. A navigation and live-action fusion method is characterized by comprising the following steps:
acquiring the current position and the running direction of a vehicle;
determining navigation data based on the current position and driving orientation of the vehicle;
determining a navigation path according to the navigation data, and calculating the position coordinate of the navigation path under a vehicle body coordinate system;
generating a navigation guide image according to the current position of the vehicle, the human eye movement track of a driver in the vehicle and the position coordinates of the navigation path;
and projecting the navigation guide image to a vehicle windshield to realize navigation and live-action fusion.
2. The navigation and live-action fusion method according to claim 1, wherein the obtaining of the current position and driving orientation of the vehicle comprises:
acquiring positioning data of a vehicle to determine the current position of the vehicle and the running direction of the vehicle;
and obtaining the position of the vehicle at each moment according to the positioning data refreshing frequency and the real-time vehicle speed information.
3. The navigation and live-action fusion method according to claim 1, wherein determining navigation data based on the current position and driving orientation of the vehicle comprises:
acquiring front road section data corresponding to the current position of the vehicle from a map, and determining a road section course angle of a shape point of a road section in front of the vehicle, wherein the front road section data comprises a slope and a rotation degree;
and acquiring the vehicle course angle of the vehicle according to the current position of the vehicle and the turning angle of the steering wheel.
4. The navigation and real world fusion method according to claim 3, wherein determining a navigation path according to the navigation data and calculating the position coordinates of the navigation path in the body coordinate system comprises:
converting the coordinates of the path planning discrete points of the front road section corresponding to the current position of the vehicle into path points;
screening the path points and leaving effective path points;
forming a smooth curve according to the positions of the effective path points and the distances between the adjacent path points;
after the effective path points are fitted into a plurality of smooth curve segments, connecting the smooth curve segments to form a path planning curve;
and mapping the current position of the vehicle to the path planning curve.
5. The navigation and live-action fusion method according to claim 1, wherein generating a navigation guidance image according to the current position of the vehicle, the eye movement track of the driver in the vehicle, and the position coordinates of the navigation path comprises:
collecting human eye moving images through a human eye detection camera;
judging three-dimensional coordinates of human eyes according to a human eye motion detection software algorithm to generate a human eye motion track;
and generating a navigation guide image by combining the current position of the vehicle, the motion track of the human eyes and the position coordinates of the navigation path.
6. The navigation and live-action fusion method of claim 1, wherein projecting the navigation guidance image to a vehicle windshield to achieve navigation and live-action fusion comprises:
transmitting the navigation guidance image to an AR-HUD display device;
the AR-HUD display device projects the light to the front windshield of the vehicle to form a HUD light path;
virtual images generated by the HUD light path are projected to a windshield of the vehicle and are fused with an actual road.
7. A navigation and live-action fusion device is characterized by comprising:
the positioning module is used for acquiring the current position and the running direction of the vehicle;
the navigation map module is used for determining navigation data based on the current position and the running direction of the vehicle;
the image processing module is used for calculating the position coordinates of the navigation path under the vehicle body coordinates according to the navigation data;
the image fusion module is used for generating a navigation guide image according to the current position of the vehicle, the human eye movement track of a driver in the vehicle and the position coordinates of the navigation path;
and the projection module is used for projecting the navigation guide image to a vehicle windshield so as to realize the fusion of navigation and real scene.
8. The navigation and reality fusion apparatus of claim 7, wherein the positioning module, the navigation map module, the image processing module, the image fusion module and the projection module are in communication connection.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the navigation and reality fusion method of any of claims 1-6.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the navigation and live-action fusion method of any one of claims 1 to 6.
CN202211055929.5A 2022-08-31 2022-08-31 Navigation and live-action fusion method and device, electronic equipment and storage medium Pending CN115406462A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211055929.5A CN115406462A (en) 2022-08-31 2022-08-31 Navigation and live-action fusion method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211055929.5A CN115406462A (en) 2022-08-31 2022-08-31 Navigation and live-action fusion method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115406462A 2022-11-29

Family

ID=84164267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211055929.5A Pending CN115406462A (en) 2022-08-31 2022-08-31 Navigation and live-action fusion method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115406462A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091740A (en) * 2023-04-11 2023-05-09 江苏泽景汽车电子股份有限公司 Information display control method, storage medium and electronic device
CN116105747A (en) * 2023-04-07 2023-05-12 江苏泽景汽车电子股份有限公司 Dynamic display method for navigation path, storage medium and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination