CN117762365A - Navigation display method, device, vehicle and storage medium - Google Patents

Publication number: CN117762365A
Application number: CN202311699724.5A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Prior art keywords: image, target, vehicle, information, display area
Inventors: 邢晨, 贾澜鹏, 何晶, 谭竞扬, 罗马
Current/original assignee: Great Wall Motor Co Ltd
Application filed by Great Wall Motor Co Ltd
Classification: Navigation (AREA)
Abstract

The application provides a navigation display method, a device, a vehicle and a storage medium. The method is applied to a vehicle that includes a head-up display device and a front window, and comprises the following steps: acquiring a first image, wherein the first image comprises at least one category of scene information around the vehicle; acquiring map information of the vehicle and mapping the map information into the first image to obtain a target image; acquiring a target eye position of the driver and determining a target display area of the front window based on the target eye position, wherein the driver's eye position has a corresponding positional relationship with the display area of the front window; and controlling the head-up display device to project the scene information and the map information in the target image to the target display area. With this method, the scene information and map information in the target image can be projected to the target display area corresponding to the eye position, so that the navigation display follows the eye position while presenting richer content, improving the user experience.

Description

Navigation display method, device, vehicle and storage medium
Technical Field
The present disclosure relates to the field of navigation technologies, and in particular, to a navigation display method, a device, a vehicle, and a storage medium.
Background
With the rapid development of navigation technology, more and more vehicle drivers rely on navigation systems to provide the information required for driving, such as map information, route planning to a destination, and navigation guidance. Conventionally, a vehicle presents its navigation display on the central control screen, and a driver who frequently looks down at the central control screen while driving is exposed to a safety risk. Therefore, a head-up display (Head-Up Display, HUD) device is arranged at the front window of the vehicle to project navigation information onto the front window, improving driving safety. Based on HUD technology, the driver's perception of the driving environment can be expanded and strengthened, improving both driving safety and the sense of technology, and the driver no longer has to switch between watching the road ahead and the central control screen.
In the related art, when the head-up display device is used to view navigation information, it can only display that information at the viewing angle directly in front of the driver, and, to avoid the navigation information interfering with the driver's view, only a small amount of content is displayed, which degrades the user experience.
Disclosure of Invention
The application provides a navigation display method, a device, a vehicle and a storage medium. The method can project scene information and map information in a target image to a target display area corresponding to the driver's eye position, so that the navigation display follows the eye position while presenting richer content, improving the user experience.
In a first aspect, a navigation display method is provided, applied to a vehicle, where the vehicle includes a head-up display device and a front window, and the method includes:
acquiring a first image, wherein the first image comprises at least one type of scene information around the vehicle;
acquiring map information of the vehicle, and mapping the map information into the first image to obtain a target image;
acquiring a target human eye position of a driver, and determining a target display area of the front window based on the target human eye position, wherein the human eye position of the driver has a corresponding position relationship with the display area of the front window;
and controlling the head-up display device to project the scene information and the map information in the target image to the target display area.
In a second aspect, a navigation display apparatus is provided for use in a vehicle including a head-up display device and a front window, the apparatus comprising:
a first acquisition module for acquiring a first image, wherein the first image comprises at least one type of scene information around the vehicle;
the second acquisition module is used for acquiring map information of the vehicle and mapping the map information into the first image to obtain a target image;
The third acquisition module is used for acquiring the target human eye position of the driver and determining the target display area of the front window based on the target human eye position, wherein the human eye position of the driver and the display area of the front window have a corresponding position relationship;
and the control module is used for controlling the head-up display device to project the scene information and the map information in the target image to the target display area.
In a third aspect, there is provided a vehicle comprising:
a memory for storing executable program code;
a processor for calling and running the executable program code from the memory to cause the vehicle to perform the navigation display method according to the first aspect.
In a fourth aspect, there is provided a computer readable storage medium storing a computer program which, when executed, implements the navigation display method of any one of the above.
The technical scheme provided by some embodiments of the present application has at least the following beneficial effects: the map information of the vehicle is mapped into the first image, which includes scene information around the vehicle, to form the target image, so that the target image contains both map information and scene information; the navigation display is therefore richer and meets different user requirements. In addition, the target display area of the front window is determined from the driver's eye position, so that the scene information and map information projected by the head-up display device change position as the eye position changes, improving the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a navigation display method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a second method for displaying navigation according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a navigation display device provided in the present application.
Fig. 4 is a schematic view of a first structure of a vehicle according to an embodiment of the present application.
Fig. 5 is a schematic view of a second structure of a vehicle according to an embodiment of the present application.
Detailed Description
In order to make the features and advantages of the present application more comprehensible, the following description will be given in detail with reference to the accompanying drawings in which embodiments of the present application are shown, and it is apparent that the described embodiments are merely some but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
In general, a vehicle presents its navigation display on the central control screen, and a driver who frequently looks down at the central control screen while driving is exposed to a safety risk. Therefore, a head-up display device is arranged at the front window of the vehicle to project navigation information onto the front window, improving driving safety. In the related art, when the head-up display device is used to view navigation information, it can only display that information at the viewing angle directly in front of the driver, and, to avoid the navigation information interfering with the driver's view, only a small amount of content is displayed, which degrades the user experience.
In order to solve the technical problems in the related art, the embodiment of the application provides a navigation display method. The following detailed description is given, respectively, of the embodiments, and the description sequence of the following embodiments is not to be taken as a limitation of the preferred sequence of the embodiments.
Referring to fig. 1, fig. 1 is a schematic flow chart of a navigation display method according to an embodiment of the disclosure. The navigation display method is applied to a vehicle, and the vehicle at least comprises a head-up display device and a front window so as to project navigation information to a display area of the front window through the head-up display device. The specific flow of the navigation display method can be as follows:
s101, acquiring a first image, wherein the first image comprises at least one kind of scene information around a vehicle.
In this embodiment, the acquired first image includes at least one category of scene information around the vehicle. Specifically, an image to be processed corresponding to the scene around the vehicle may be acquired, and image segmentation processing may be performed on the image to be processed based on target pixel labels to obtain a first image carrying at least one category of scene information.
The image to be processed may be an initial image of the scene around the vehicle captured by a driving video recorder (Digital Video Recorder, DVR), i.e., a dashboard camera, and may include scene information of categories such as buildings, sky, pedestrians, traffic lights, other vehicles, road signs, and obstacles in front of the vehicle. Each category of scene information can be assigned a corresponding target pixel label, so that the image to be processed can be segmented by target pixel label and pixels belonging to the same label are grouped into the same category of scene information, yielding a first image carrying at least one category of scene information.
For example, if road signs are assigned a first pixel label, pedestrians a second pixel label, and other vehicles a third pixel label, then segmenting the image to be processed based on the first pixel label yields a first image carrying road sign information; segmenting based on the second pixel label yields a first image carrying pedestrian information; and segmenting based on the third pixel label yields a first image carrying other-vehicle information.
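The label-based segmentation step described above can be illustrated with a minimal sketch. The class ids, array shapes and function name below are illustrative assumptions, not details from the patent; in practice the per-pixel label map would come from a segmentation model:

```python
import numpy as np

# Hypothetical class ids for the target pixel labels (not from the patent)
ROAD_SIGN, PEDESTRIAN, OTHER_VEHICLE = 1, 2, 3

def extract_scene_layer(seg_map, image, target_label):
    """Keep only the pixels whose segmentation label equals target_label.

    seg_map: (H, W) array of per-pixel class ids.
    image:   (H, W, 3) source frame from the driving video recorder.
    Returns a copy of `image` with all other pixels zeroed out.
    """
    mask = seg_map == target_label          # boolean (H, W) mask
    layered = np.zeros_like(image)
    layered[mask] = image[mask]             # retain only the chosen category
    return layered

# Toy 2x2 frame in which one pixel is labelled as a road sign
seg = np.array([[ROAD_SIGN, 0],
                [0, PEDESTRIAN]])
img = np.full((2, 2, 3), 255, dtype=np.uint8)
signs_only = extract_scene_layer(seg, img, ROAD_SIGN)   # only pixel (0, 0) kept
```

Running the same function once per target pixel label produces one "first image" per scene category, matching the example above.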
S102, acquiring map information of the vehicle, and mapping the map information into the first image to obtain a target image.
Optionally, the map information of the vehicle can be obtained through various positioning means such as a global positioning system, an inertial navigation system, wireless positioning technology, and vehicle-mounted sensors, and the map information is mapped into the first image carrying the scene information to obtain the target image. The map information and scene information in the target image can then be projected to the target display area of the front window through HUD technology, which avoids the driving hazard of the driver lowering his head to look at the central control screen and allows more navigation information to be displayed on the front window, further improving the driver's experience. The map information may include information such as the vehicle's travelling speed, the travelling route, and the distance to the destination.
In this embodiment, a second image carrying map information of the vehicle may be acquired; acquiring the position of a first source corresponding to the first image and the position of a second source corresponding to the second image; obtaining a first transformation matrix between the first image and the second image based on the position of the first source and the position of the second source; and mapping the map information in the second image into the first image according to the first transformation matrix to obtain the target image.
Specifically, the vehicle can be positioned by identifying information such as road signs, buildings or obstacles during driving through vehicle-mounted sensors such as the vehicle-mounted camera, radar or lidar, thereby obtaining a second image carrying the map information of the vehicle. Since the second image is obtained through a vehicle-mounted sensor, the mounting position of that sensor is the position of the second source corresponding to the second image; since the first image is obtained through the driving video recorder, its mounting position is the position of the first source corresponding to the first image.
It can be understood that, because the driving video recorder and the vehicle-mounted sensor are fixed at their mounting positions on the vehicle, the relative positional relationship between the first source position (the driving video recorder) and the second source position (the vehicle-mounted sensor) can be obtained. From this relationship, a first transformation matrix that registers the first image and the second image through translation, rotation, tilt, scaling and the like can be derived; the first transformation matrix is a homography matrix. The second image can therefore be registered to the first image based on the first transformation matrix, and the map information in the second image can be mapped into the first image to obtain the target image.
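The registration by homography can be sketched briefly: a 3×3 matrix maps pixel coordinates from the second (map) image into the first (scene) image. The matrix values below are purely illustrative and not derived from any real sensor layout:

```python
import numpy as np

def warp_points(H, pts):
    """Map 2-D points through a 3x3 homography H.

    pts: (N, 2) pixel coordinates in the second (map) image.
    Returns (N, 2) coordinates in the first (scene) image.
    """
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian

# Illustrative homography: scale by 2, then translate by (10, 5)
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0,  5.0],
              [0.0, 0.0,  1.0]])
pts = np.array([[0.0, 0.0], [3.0, 4.0]])
warped = warp_points(H, pts)   # -> [[10, 5], [16, 13]]
```

In a real pipeline the homography would be estimated from the fixed relative pose of the two sensors (e.g. with a routine such as OpenCV's `findHomography`), then used to paste map annotations into the scene image.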
S103, acquiring a target human eye position of the driver, and determining a target display area of the front window based on the target human eye position, wherein the human eye position of the driver has a corresponding position relationship with the display area of the front window.
It should be noted that HUD technology projects navigation information onto the front window within the driver's field of view, so that the driver can obtain vehicle status information without taking his eyes off the road. However, a HUD can generally only project information onto a fixed area of the front window, such as the position directly in front of the driver's view. When the driver's line of sight shifts, for example towards the rearview mirror, the driver cannot promptly see the information displayed directly ahead and may miss part of it. Moreover, because drivers differ in height, the visual effect of viewing information projected at a fixed area of the front window also differs, which harms the experience.
In this embodiment, the driver's eye position is acquired, and a third transformation matrix corresponding to the relative positional relationship between the eye position and the display area of the front window is calculated from it. Based on the third transformation matrix, the information displayed in the display area of the front window can be repositioned through translation, rotation, scaling and the like so that it always faces the direction in which the driver's eyes are looking. That is, through the third transformation matrix, the target display area onto which the map information and scene information in the target image are mapped can be switched in real time to the viewing angle corresponding to the driver's eye position, so that the driver can view the navigation information at any time and no information is missed.
Specifically, the target eye position of the driver can be acquired, and based on the target eye position and the third transformation matrix, the map information and scene information are moved, through translation, rotation, scaling and the like, to the target display area corresponding to the direction in which the driver's eyes are looking, the target display area being the real-time display position at which the driver views the driving information.
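A simplified sketch of the eye-following display area: here the full third transformation matrix is reduced to a pure translation proportional to the eye displacement. The coordinate frame, reference position and `gain` parameter are all illustrative assumptions, not values from the patent:

```python
def target_display_area(eye_pos, base_area, ref_eye, gain=0.5):
    """Shift the front-window display area to follow the driver's eyes.

    eye_pos, ref_eye: (x, y) eye positions in a cabin coordinate frame,
                      ref_eye being a calibrated reference seating position.
    base_area: (x0, y0, x1, y1) display rectangle for the reference position.
    gain: hypothetical scaling between eye motion and projection motion.
    """
    dx = gain * (eye_pos[0] - ref_eye[0])
    dy = gain * (eye_pos[1] - ref_eye[1])
    x0, y0, x1, y1 = base_area
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)

# Eyes moved 20 units right of the reference: the area shifts 10 units right
area = target_display_area((120, 50), (300, 100, 500, 200), (100, 50))
# -> (310.0, 100.0, 510.0, 200.0)
```

In the actual method the repositioning may also include rotation and scaling, which a full 3×3 transformation matrix would capture; the translation-only version above just shows the following behaviour.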
And S104, controlling the head-up display device to project scene information and map information in the target image to the target display area.
Optionally, after the target display area of the front window corresponding to the driver's target eye position is acquired, the head-up display device can be controlled to project the scene information and map information in the target image to the target display area. The scene information and map information in the target display area can also be rendered by the head-up display device to enhance the display effect of the information and further improve the driving experience.
As can be seen from the above, this embodiment acquires a first image including at least one category of scene information around the vehicle; acquires map information of the vehicle and maps it into the first image to obtain a target image; acquires the driver's target eye position and determines the target display area of the front window based on it, the driver's eye position having a corresponding positional relationship with the display area of the front window; and controls the head-up display device to project the scene information and map information in the target image to the target display area. Mapping the vehicle's map information into the first image, which includes scene information around the vehicle, forms a target image containing both map information and scene information, so that the navigation display is richer and meets different user requirements. In addition, because the target display area of the front window is determined from the driver's eye position, the scene information and map information projected by the head-up display device change position as the eye position changes, improving the user experience.
The method described in the previous examples is described in further detail below by way of example. Referring to fig. 2, fig. 2 is a schematic flow chart of a navigation display method according to an embodiment of the disclosure. The navigation display method is applied to a vehicle, and the vehicle at least can comprise a head-up display device and a front window. The specific flow of the navigation display method can comprise the following steps:
s201, obtaining an image to be processed corresponding to a scene around the vehicle, and performing image segmentation on the image to be processed based on the target pixel labels to obtain a first image carrying at least one type of scene information.
In this embodiment, the image to be processed may be an initial image of the scene around the vehicle captured by the driving video recorder, i.e., the dashboard camera, and may include scene information of categories such as buildings, sky, pedestrians, traffic lights, other vehicles, road signs, and obstacles in front of the vehicle. Each category of scene information can be assigned a corresponding target pixel label, so that the image to be processed can be segmented by target pixel label and pixels belonging to the same label are grouped into the same category of scene information, yielding a first image carrying at least one category of scene information.
For example, if road signs are assigned a first pixel label, pedestrians a second pixel label, and other vehicles a third pixel label, then segmenting the image to be processed based on the first pixel label yields a first image carrying road sign information; segmenting based on the second pixel label yields a first image carrying pedestrian information; and segmenting based on the third pixel label yields a first image carrying other-vehicle information.
Optionally, the image to be processed may be segmented by means of semantic segmentation, so that the scene information around the vehicle is divided into different categories and a first image carrying at least one category of scene information is obtained. Semantic segmentation operates at the pixel level: each pixel in the image to be processed is assigned to a corresponding category, and the image is divided into several mutually disjoint regions according to features such as grey level, colour, spatial texture and geometric shape, so that scene information belonging to the same semantic category displays the same characteristics. For example, if the first image includes three categories of scene information corresponding to three pixel labels, namely pedestrians, road signs and buildings, then all pedestrians in the first image may be displayed in blue, all road signs in red, all buildings in green, and so on, according to the same semantics.
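The "same semantics, same colour" rendering just described can be sketched as a palette lookup over the per-pixel class map. The class ids and colours below follow the example in the text but are otherwise arbitrary assumptions:

```python
import numpy as np

# Hypothetical palette: same semantic category -> same colour (R, G, B)
PALETTE = {1: (0, 0, 255),    # pedestrians -> blue
           2: (255, 0, 0),    # road signs  -> red
           3: (0, 255, 0)}    # buildings   -> green

def colorize(seg_map):
    """Render a (H, W) per-pixel class map as an (H, W, 3) RGB overlay.

    Pixels with a label not in the palette stay black (unclassified).
    """
    out = np.zeros(seg_map.shape + (3,), dtype=np.uint8)
    for label, colour in PALETTE.items():
        out[seg_map == label] = colour
    return out

seg = np.array([[1, 2],
                [3, 0]])
overlay = colorize(seg)   # pedestrian pixel -> (0, 0, 255), etc.
```

Instance segmentation, discussed next, would additionally vary the shade within each category to separate individuals (e.g. dark blue vs light blue pedestrians).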
Optionally, the image to be processed may instead be segmented by means of instance segmentation, which, on the basis of semantic segmentation, further distinguishes different individuals belonging to the same category. Unlike semantic segmentation, instance segmentation can individually segment the several categories of scene information screened out by the target pixel labels. For example, if the target pixel labels are pedestrians, road signs and buildings, then all pedestrians, all road signs and all buildings in the image to be processed can be segmented one by one: all pedestrians are blue, with pedestrian A dark blue and pedestrian B light blue; all road signs are red, with road sign 1 dark red and road sign 2 bright red; all buildings are green, with building a dark green and building b light green; and so on.
To further segment all semantic scene information in the image to be processed, panoptic segmentation may be adopted, that is, all categories of objects in the image to be processed are used as target pixel labels for segmentation, so that the resulting segmented image contains all scene information around the vehicle. It can be understood that by performing panoptic segmentation on the scene information around the vehicle, more road information can be presented in the driver's field of view when the navigation information is projected onto the front window through the head-up display device, which improves the driving experience and further improves driving safety. For example, displaying pedestrian information can alert the driver to pedestrians, and displaying road sign information can prevent the driver from taking a wrong turn.
S202, acquiring map information of the vehicle, and mapping the map information into the first image to obtain a target image.
In this embodiment, a second image carrying map information of the vehicle may be acquired; acquiring the position of a first source corresponding to the first image and the position of a second source corresponding to the second image; obtaining a first transformation matrix between the first image and the second image based on the position of the first source and the position of the second source; and mapping the map information in the second image into the first image according to the first transformation matrix to obtain the target image.
Specifically, the vehicle can be positioned by identifying information such as road signs, buildings or obstacles during driving through vehicle-mounted sensors such as the vehicle-mounted camera, radar or lidar, thereby obtaining a second image carrying the map information of the vehicle. Since the second image is obtained through a vehicle-mounted sensor, the mounting position of that sensor is the position of the second source corresponding to the second image; since the first image is obtained through the driving video recorder, its mounting position is the position of the first source corresponding to the first image.
It can be understood that, because the driving video recorder and the vehicle-mounted sensor are fixed at their mounting positions on the vehicle, the relative positional relationship between the first source position (the driving video recorder) and the second source position (the vehicle-mounted sensor) can be obtained. From this relationship, a first transformation matrix that registers the first image and the second image through translation, rotation, tilt, scaling and the like can be derived; the first transformation matrix is a homography matrix. The second image can therefore be registered to the first image based on the first transformation matrix, and the map information in the second image can be mapped into the first image to obtain the target image.
S203, performing position compensation in the horizontal direction on the vehicle position in the target image.
It should be noted that the image to be processed captured by the driving video recorder contains the actual scene information captured at the vehicle's actual driving position, whereas the positioning information corresponding to the second image assumes the vehicle is in the middle of the road. If the vehicle is driving towards the left or right side of the road, a certain deviation therefore appears between the scene information and the map information in the target image, which affects the driving experience.
To solve this problem, this embodiment performs position compensation in the horizontal direction on the vehicle position in the target image. Specifically, the scene information and map information in the target image are used as input data, and the vehicle's position in the road is used as verification data; training is performed in a preset training model based on the input data and the verification data to obtain a target training result, where the target training result indicates the vehicle's position relative to the road centre along the direction perpendicular to the driving direction. Image compensation can then be performed on the target image based on the vehicle's position relative to the road centre, so that the first image and the second image are calibrated to each other, that is, map information and scene information that match each other are obtained, improving navigation accuracy.
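Once the trained model has estimated the vehicle's lateral offset from the road centre, the compensation itself reduces to a horizontal shift of the mapped map-information points. The following sketch assumes a hypothetical image-space scale (`px_per_m`) and treats the offset as already estimated; neither value comes from the patent:

```python
import numpy as np

def compensate_lateral(points, lane_center_x, vehicle_x, px_per_m=40.0):
    """Shift mapped map-information points horizontally so they align with
    the vehicle's true lateral position in the lane.

    points:        (N, 2) pixel coordinates of map annotations in the target image.
    lane_center_x: lateral position (m) assumed by the map positioning
                   (vehicle in the middle of the road).
    vehicle_x:     lateral position (m) estimated by the trained model.
    px_per_m:      hypothetical pixels-per-metre scale at the annotation depth.
    """
    offset_px = (vehicle_x - lane_center_x) * px_per_m
    shifted = np.asarray(points, dtype=float).copy()
    shifted[:, 0] += offset_px              # horizontal (x) compensation only
    return shifted

pts = np.array([[100.0, 200.0]])
shifted = compensate_lateral(pts, lane_center_x=0.0, vehicle_x=0.5)
# vehicle 0.5 m right of centre -> annotations shift 20 px right
```

Only the x coordinate is adjusted, matching the text's "position compensation in the horizontal direction".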
S204, acquiring the human eye position of the driver, and calculating to obtain a third transformation matrix corresponding to the relative position relation between the human eye position and the display area of the front window.
It should be noted that the position of the vehicle's driving video recorder is fixed, that is, the position of the first source corresponding to the first image is fixed, and the target image is obtained from the first image after the map information of the second image has been mapped into it. To meet the requirement that the map information and scene information projected onto the front window by the head-up display device always follow the driver's line of sight, the positions of the map information and scene information within the display area of the front window are not fixed. Therefore, while ensuring that the map information and scene information can always be mapped within the display area of the front window, a second transformation matrix between the target image and the image displayed in the display area of the front window can be obtained based on the position of the first source and the position corresponding to the display area of the front window, and the scene information and map information in the target image are mapped to the display area of the front window according to the second transformation matrix.
It should be noted that HUD technology projects navigation information onto the front window within the driver's field of view, so that the driver can obtain vehicle status information without taking his eyes off the road. However, a HUD can generally only project information onto a fixed area of the front window, such as the position directly in front of the driver's view. When the driver's line of sight shifts, for example towards the rearview mirror, the driver cannot promptly see the information displayed directly ahead and may miss part of it. Moreover, because drivers differ in height, the visual effect of viewing information projected at a fixed area of the front window also differs, which harms the experience.
In this embodiment, the third transformation matrix corresponding to the relative positional relationship between the human eye position and the display area of the front window is calculated based on the acquired human eye position of the driver. Based on the third transformation matrix, the information displayed in the display area of the front window can be repositioned by translation, rotation, scaling and the like so that it always faces the direction in which the driver's eyes look directly; that is, the target display area onto which the map information and scene information in the target image are mapped can be switched in real time, through the third transformation matrix, to the viewing angle corresponding to the driver's eye position, so that the driver can view the navigation information at any time and no information is missed.
Specifically, face feature information of the driver and the coordinate data corresponding to the face feature information can be acquired; the human eye position of the driver is determined based on the face feature information and its corresponding coordinate data; and the third transformation matrix between the human eye position and the position corresponding to the display area of the front window is obtained based on these two positions.
Face feature detection and face key point detection are performed on the driver by a DMS camera and an OMS camera, respectively, to obtain the driver's face feature information and the coordinate data corresponding to it, so that the position of the center of the eyes is obtained in the image captured by the DMS camera and in the image captured by the OMS camera, respectively. The driver's face is then calibrated to obtain its specific position in the vehicle, and the specific position of the center of the eyes in the vehicle is obtained through the binocular vision measurement principle; that is, the driver's eye position is determined. It should be noted that the DMS camera and the OMS camera may be existing cameras of the vehicle: the DMS camera may be disposed above the driver to monitor the driver, and the OMS camera may be disposed at the position of the interior rearview mirror to monitor the front and rear passengers. Determining the eye position with the DMS and OMS cameras allows each camera to serve multiple purposes, so that no additional camera needs to be installed and resource waste is avoided; and since the positions of the DMS and OMS cameras are fixed in the vehicle, the face position can be confirmed more conveniently.
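The binocular vision measurement principle mentioned above can be sketched, under the strong simplifying assumption of a rectified camera pair with known focal length and baseline (in practice the DMS and OMS cameras are not a rectified pair and would require full calibration), as classic stereo triangulation. The numbers below are hypothetical, not calibration values from the patent.

```python
def triangulate_point(f_px, baseline_m, x_left, x_right, y_px):
    """Recover the 3-D position of a point (e.g. the center of the eyes)
    from a rectified stereo pair: depth is inversely proportional to the
    horizontal disparity between the two image projections."""
    disparity = x_left - x_right        # pixels
    z = f_px * baseline_m / disparity   # depth along the optical axis (m)
    x = z * x_left / f_px               # lateral offset (m)
    y = z * y_px / f_px                 # vertical offset (m)
    return x, y, z

# Hypothetical setup: focal length 1000 px, cameras 10 cm apart.
eye_xyz = triangulate_point(1000.0, 0.10, x_left=50.0, x_right=-75.0, y_px=25.0)
```

The recovered 3-D eye position is then expressed in the vehicle coordinate frame (the cameras' mounting positions are fixed and known) before the third transformation matrix is computed.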
Because the eye position changes continuously while the driver is driving, the positions at which the map information and scene information are displayed in the display area of the front window can be moved, by translation, rotation and the like through the third transformation matrix, to the direction corresponding to the eye position, so that the driver can view useful navigation information at any time, improving the driving experience.
S205, acquiring a target human eye position of a driver, and determining a target display area of the front window based on the target human eye position, wherein the human eye position of the driver has a corresponding position relationship with the display area of the front window.
Specifically, the target eye position of the driver can be acquired, and based on the target eye position and the third transformation matrix, the map information and scene information are moved, by translation, rotation, scaling and the like, to the target display area corresponding to the direction in which the driver's eyes look directly, where the target display area is the real-time display position at which the driver views the driving information.
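The translation/rotation/scaling adjustment described above can be expressed as a single similarity transform in homogeneous coordinates; composing it each frame from the eye-position offset is one plausible realization. The functions and coordinates below are illustrative assumptions, not the patent's actual third transformation matrix.

```python
import numpy as np

def similarity_matrix(tx, ty, angle_rad=0.0, scale=1.0):
    """3x3 homogeneous matrix combining scaling, rotation and translation."""
    c = np.cos(angle_rad) * scale
    s = np.sin(angle_rad) * scale
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def move_anchor(anchor_xy, eye_offset_xy):
    """Shift the display-area anchor point by the offset derived from
    the change in the driver's eye position (pure translation here)."""
    M = similarity_matrix(eye_offset_xy[0], eye_offset_xy[1])
    p = M @ np.array([anchor_xy[0], anchor_xy[1], 1.0])
    return (p[0], p[1])

# Hypothetical anchor at (900, 450); the eyes moved 40 px left, 12 px down.
new_anchor = move_anchor((900.0, 450.0), (-40.0, 12.0))
```

Rotation and scale terms would be filled in from the angle and distance of the eye position relative to the display plane; here they are left at identity for clarity.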
S206, determining the position of the scene information in the target display area, and controlling the head-up display device to project prompt information to the position corresponding to the scene information and/or to project the marked scene information to the target display area.
Optionally, after the target display area of the front window corresponding to the target eye position of the driver is determined, the head-up display device may be controlled to project the scene information and the map information in the target image to the target display area. The scene information and map information in the target display area can also be rendered by the head-up display device to improve the display effect of the information.
In addition, since the map information and the scene information are projected in the target display area of the front window, other prompt information can be added to increase interaction with the driver and further improve the driving experience. Specifically, the position of the scene information in the target display area can be determined, and the head-up display device is controlled to project prompt information to the position corresponding to the scene information, and/or to project the marked scene information to the target display area.
For example, the scene information projected onto the target display area of the front window includes pedestrians, obstacles and the like, and the map information includes the travel route, the vehicle speed and the like. Because pedestrians move continuously, a pedestrian may appear suddenly in the target display area of the front window; and because pedestrians are small relative to other vehicles and buildings, the area they occupy in the target display area is not conspicuous. If the driver is not attentive enough, a suddenly appearing pedestrian may go unnoticed, creating a driving hazard. Therefore, prompt information such as early-warning symbols can be added around uncertain scene information such as pedestrians and obstacles to remind the driver to take emergency avoidance action, reducing driving risk. Alternatively, such scene information may first be marked, for example by rendering pedestrians and obstacles in colors more striking than those of buildings, and the marked scene information is then projected to the target display area to alert the driver.
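A minimal sketch of this marking step, assuming a hypothetical list of segmented objects with class labels and bounding boxes; the class names, colors, and dictionary keys are illustrative assumptions rather than anything specified in the patent.

```python
# Classes treated as "uncertain" hazards that warrant an early-warning mark.
HAZARD_CLASSES = {"pedestrian", "obstacle"}

def annotate_hazards(objects):
    """Attach a warning flag and a conspicuous color to uncertain scene
    elements (pedestrians, obstacles) before projecting to the HUD."""
    annotated = []
    for obj in objects:
        hazardous = obj["cls"] in HAZARD_CLASSES
        annotated.append({**obj,
                          "color": "red" if hazardous else "gray",
                          "warn_symbol": hazardous})
    return annotated

# Hypothetical segmentation output for one frame.
scene = [{"cls": "pedestrian", "bbox": (320, 210, 360, 300)},
         {"cls": "building",   "bbox": (0, 0, 200, 400)}]
marked = annotate_hazards(scene)
```

The HUD renderer would then draw the warning symbol at each flagged bounding box and use the assigned color for the marked scene information.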
As can be seen from the above, in this embodiment: an image to be processed corresponding to the scene around the vehicle is acquired, and image segmentation is performed on it based on the target pixel label to obtain a first image carrying at least one type of scene information; map information of the vehicle is acquired and mapped into the first image to obtain a target image; position compensation in the horizontal direction is performed on the vehicle position in the target image; the driver's eye position is acquired, and a third transformation matrix corresponding to the relative positional relationship between the eye position and the display area of the front window is calculated; the target eye position of the driver is acquired, and the target display area of the front window is determined based on it; the position of the scene information in the target display area is determined, and the head-up display device is controlled to project prompt information to the position corresponding to the scene information and/or project the marked scene information to the target display area. Mapping the map information of the vehicle into the first image containing scene information around the vehicle forms a target image that includes both map information and scene information, making the navigation display content richer and better able to meet the varied needs of users. In addition, determining the target display area of the front window corresponding to the driver's eye position allows the scene information and map information projected by the head-up display device to switch projection positions as the eye position changes, improving the user experience.
In addition, the embodiment of the present application further provides a navigation display device. Referring to fig. 3, fig. 3 is a schematic structural diagram of a navigation display device according to an embodiment of the present application. The navigation display apparatus 300 may be applied to a vehicle including at least a head-up display device and a front window. Specifically, the navigation display apparatus 300 may include a first acquisition module 301, a second acquisition module 302, a third acquisition module 303, and a control module 304, as follows:
a first obtaining module 301, configured to obtain a first image, where the first image includes at least one type of scene information around a vehicle;
the second obtaining module 302 is configured to obtain map information of a vehicle, and map the map information into the first image to obtain a target image;
a third obtaining module 303, configured to obtain a target human eye position of the driver, and determine a target display area of the front window based on the target human eye position, where the human eye position of the driver has a corresponding positional relationship with the display area of the front window;
and the control module 304 is used for controlling the head-up display device to project the scene information and the map information in the target image to the target display area.
In some embodiments, the first acquisition module 301 may also be configured to: acquiring an image to be processed corresponding to a scene around a vehicle; and carrying out image segmentation on the image to be processed based on the target pixel label to obtain a first image carrying at least one kind of scene information.
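As an illustrative sketch of label-based segmentation (assuming a NumPy per-pixel label map, which is not specified by the patent): each target pixel label selects one class of scene information from the segmented image.

```python
import numpy as np

def extract_scene_info(label_map, target_labels):
    """Return one boolean mask per target pixel label; each mask marks
    the pixels belonging to that class of scene information."""
    return {lab: label_map == lab for lab in target_labels}

# Hypothetical label scheme: 0 = background, 1 = pedestrian, 2 = vehicle.
label_map = np.array([[0, 1, 1],
                      [2, 2, 0]])
masks = extract_scene_info(label_map, target_labels=[1, 2])
```

The first image would then carry whichever classes of scene information the target pixel labels select, with everything else discarded as background.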
Further, the second obtaining module 302 may be further configured to: acquiring a second image carrying map information of the vehicle; acquiring the position of a first source corresponding to a first image and the position of a second source corresponding to a second image; obtaining a first transformation matrix between the first image and the second image based on the position of the first source and the position of the second source; and mapping the map information in the second image into the first image according to the first transformation matrix to obtain the target image.
In some embodiments, the navigation display device 300 may further include a training module, after mapping the map information in the second image into the first image according to the first transformation matrix, the training module may be configured to: taking scene information and map information in a target image as input data and the position of a vehicle in a road as verification data; training is performed in a preset training model based on the input data and the verification data, and a target training result is obtained, wherein the target training result indicates the central position of the vehicle in the road along the direction perpendicular to the running direction of the vehicle.
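The training step above can be sketched, under the strong simplifying assumption that the lateral center position can be regressed linearly from image-derived features, as an ordinary least-squares fit. This stands in for the unspecified "preset training model" purely for illustration; the patent does not disclose the model's architecture.

```python
import numpy as np

def fit_center_position(features, lateral_positions):
    """Least-squares linear model: predicted lateral center position
    (perpendicular to the travel direction) = features . w + bias."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias column
    w, *_ = np.linalg.lstsq(X, lateral_positions, rcond=None)
    return w

def predict_center(w, feature_vec):
    """Predict the lateral center position for one feature vector."""
    return float(np.append(feature_vec, 1.0) @ w)

# Synthetic verification data generated as position = 0.5*f0 - 0.2*f1 + 0.1.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.5]])
pos = feats @ np.array([0.5, -0.2]) + 0.1
w = fit_center_position(feats, pos)
```

Here the scene/map-derived features play the role of the input data and the known vehicle positions play the role of the verification data, matching the roles described in the training module.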
In some embodiments, the navigation display apparatus 300 may further include a mapping module, after mapping the map information in the second image into the first image according to the first transformation matrix, the mapping module may be configured to: obtaining a second transformation matrix between the target image and the image displayed in the display area of the front window based on the position of the first source and the position corresponding to the display area of the front window; according to the second transformation matrix, the scene information and the map information in the target image are mapped to the display area of the front window.
Optionally, the first acquisition module 301 may be further configured to: acquire a second image to be processed carrying map information, and take the second image to be processed as the target image.
In some embodiments, the navigation display device 300 may further include a processing module, which may be configured to, prior to acquiring the target human eye position of the driver and determining the target display area of the front window based on the target human eye position: acquiring face characteristic information of a driver and coordinate data corresponding to the face characteristic information; determining the human eye position of a driver based on the human face characteristic information and the coordinate data corresponding to the human face characteristic information; and obtaining a third transformation matrix between the human eye position and the position corresponding to the display area based on the human eye position and the position corresponding to the display area of the front window.
Further, the third obtaining module 303 may be further configured to: obtaining a target human eye position of a driver; determining a target display area of the front window based on the target human eye position and the third transformation matrix; the scene information and the map information in the target image are mapped to the target display area based on the target display area of the front window and the second transformation matrix.
Optionally, the control module 304 may also be configured to: determining the position of scene information in a target display area; and controlling the head-up display device to project the prompt information to a position corresponding to the scene information and/or projecting the marked scene information to the target display area.
It should be noted that the navigation display device provided in the embodiment of the present application and the navigation display method in the above embodiments belong to the same concept. Any method provided in the navigation display method embodiments may be run on the navigation display device; for its specific implementation, refer to the navigation display method embodiments, which will not be repeated here.
In this embodiment, the navigation display apparatus 300 acquires a first image through the first acquisition module 301, wherein the first image includes at least one type of scene information around the vehicle; acquires map information of the vehicle through the second acquisition module 302 and maps the map information into the first image to obtain a target image; acquires the target human eye position of the driver through the third acquisition module 303 and determines the target display area of the front window based on it; and controls the head-up display device through the control module 304 to project the scene information and map information in the target image to the target display area. Mapping the map information of the vehicle into the first image containing scene information around the vehicle forms a target image that includes both map information and scene information, making the navigation display content richer to meet the varied needs of users. In addition, determining the target display area of the front window corresponding to the driver's eye position allows the scene information and map information projected by the head-up display device to switch projection positions as the eye position changes, improving the user experience.
The present embodiment also provides a computer-readable storage medium having stored therein computer program code which, when run on a computer, causes the computer to perform the above-described related method steps to implement a navigation display method provided by the above-described embodiments.
Wherein the storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disk or optical disk, and the like.
The instructions stored in the storage medium can execute the steps of any navigation display method provided in the embodiments of the present application, and can therefore achieve the beneficial effects of any such method; see the foregoing embodiments for details, which are not repeated here.
Accordingly, the embodiment of the present application further provides a vehicle 400, which may include devices such as a vehicle-mounted communication box. Referring to fig. 4, fig. 4 is a schematic diagram of a first structure of a vehicle according to an embodiment of the present application. The vehicle 400 includes a processor 401 and a memory 402, and the processor 401 is electrically connected to the memory 402.
The processor 401 is the control center of the vehicle 400. It connects the various parts of the entire vehicle using various interfaces and lines, and performs the various functions of the vehicle and processes data by running or calling computer programs stored in the memory 402 and by calling data stored in the memory 402, thereby monitoring the vehicle as a whole.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the computer programs and modules stored in the memory 402. The memory 402 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, a computer program required for at least one function, and the like; the storage data area may store data created according to the use of the vehicle, etc.
In addition, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
In this embodiment, the processor 401 in the vehicle 400 loads the instructions corresponding to the processes of one or more computer programs into the memory 402 according to the following steps, and the processor 401 executes the computer programs stored in the memory 402, so as to implement various functions, as follows:
acquiring a first image, wherein the first image comprises at least one type of scene information around a vehicle;
Acquiring map information of a vehicle, and mapping the map information into a first image to obtain a target image;
acquiring a target human eye position of a driver, and determining a target display area of a front window based on the target human eye position, wherein the human eye position of the driver has a corresponding position relationship with the display area of the front window;
and controlling the head-up display device to project scene information and map information in the target image to the target display area.
In some embodiments, referring to fig. 5, fig. 5 is a schematic diagram of a second structure of a vehicle according to an embodiment of the present application. The vehicle 400 may include: processor 401, memory 402, display 403, camera assembly 404, audio circuit 405, sensor 406, and power supply 407. The processor 401 is electrically connected to the display 403, the camera module 404, the audio circuit 405, the sensor 406, and the power supply 407, respectively.
The display screen 403 may be used to display information entered by a user or provided to a user as well as various graphical user interfaces of the vehicle, which may be composed of images, text, icons, video, and any combination thereof.
The camera assembly 404 may include image processing circuitry, which may be implemented using hardware and/or software components, and may include various processing units defining an image signal processing (ISP) pipeline. The image processing circuit may include at least: a plurality of cameras, an image signal processor (ISP), control logic, an image memory, and the like. Each camera may include one or more lenses and an image sensor. The image sensor may include a color filter array (e.g., a Bayer filter). The image sensor may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the image signal processor.
The audio circuit 405 may be used to provide an audio interface between the user and the vehicle through a speaker, microphone. Wherein the audio circuit 405 comprises a microphone. The microphone is electrically connected to the processor 401. The microphone is used for receiving voice information input by a user.
The sensor 406 is used to collect information of the vehicle itself or information of a user or external environment information. For example, the sensor 406 may include one or more of a vibration sensor, a temperature sensor, a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, a gesture sensor, a barometer, a heart rate sensor, and the like.
The power supply 407 is used to power the various components of the vehicle 400. In some embodiments, the power supply 407 may be logically connected to the processor 401 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
For the navigation display device of the embodiment of the present application, each functional module may be integrated in one processing chip, or each module may exist alone physically, or two or more modules may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated module may also be stored in a computer readable storage medium if implemented in the form of a software functional module and sold or used as a stand alone product.
The navigation display method, apparatus, vehicle and storage medium provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and embodiments of the present application; the above description is intended only to help understand the method of the present application and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this description should not be construed as limiting the present application.

Claims (11)

1. A navigation display method, characterized by being applied to a vehicle including a head-up display device and a front window, the method comprising:
Acquiring a first image, wherein the first image comprises at least one type of scene information around the vehicle;
acquiring map information of the vehicle, and mapping the map information into the first image to obtain a target image;
acquiring a target human eye position of a driver, and determining a target display area of the front window based on the target human eye position, wherein the human eye position of the driver has a corresponding position relationship with the display area of the front window;
and controlling the head-up display device to project the scene information and the map information in the target image to the target display area.
2. The navigation display method of claim 1, wherein the acquiring a first image, wherein the first image includes at least one type of scene information around the vehicle, comprises:
acquiring an image to be processed corresponding to a scene around the vehicle;
and carrying out image segmentation processing on the image to be processed based on the target pixel label to obtain the first image carrying at least one kind of scene information.
3. The navigation display method according to claim 2, wherein the acquiring map information of the vehicle and mapping the map information into the first image to obtain the target image includes:
Acquiring a second image carrying map information of the vehicle;
acquiring the position of a first source corresponding to the first image and the position of a second source corresponding to the second image;
obtaining a first transformation matrix between the first image and the second image based on the location of the first source and the location of the second source;
and mapping the map information in the second image into the first image according to the first transformation matrix to obtain the target image.
4. A navigation display method according to claim 3, wherein after said mapping of said map information in said second image into said first image according to said first transformation matrix, said method further comprises:
taking the scene information and the map information in the target image as input data, and taking the position of the vehicle in a road as verification data;
training is carried out in a preset training model based on the input data and the verification data, and a target training result is obtained, wherein the target training result indicates the central position of the vehicle in the road along the direction perpendicular to the running direction of the vehicle.
5. A navigation display method according to claim 3, wherein after said mapping of said map information in said second image into said first image according to said first transformation matrix, said method further comprises:
obtaining a second transformation matrix between the target image and the image displayed in the display area of the front window based on the position of the first source and the position corresponding to the display area of the front window;
and mapping the scene information and the map information in the target image to a display area of the front window according to the second transformation matrix.
6. The navigation display method of claim 5, wherein prior to the obtaining the target human eye position of the driver and determining the target display area of the front window based on the target human eye position, the method further comprises:
acquiring face characteristic information of a driver and coordinate data corresponding to the face characteristic information;
determining the human eye position of a driver based on the human face characteristic information and the coordinate data corresponding to the human face characteristic information;
and obtaining a third transformation matrix between the human eye position and the position corresponding to the display area of the front window based on the human eye position and the position corresponding to the display area.
7. The navigation display method of claim 6, wherein the obtaining a target human eye position of the driver and determining the target display area of the front window based on the target human eye position comprises:
obtaining a target human eye position of a driver;
determining a target display area of the front window based on the target human eye position and the third transformation matrix;
and mapping the scene information and the map information in the target image to the target display area based on the target display area of the front window and the second transformation matrix.
8. The navigation display method according to any one of claims 1 to 7, characterized in that the method further comprises:
determining the position of the scene information in the target display area;
and controlling the head-up display equipment to project the prompt information to a position corresponding to the scene information and/or to project the scene information after the marking process to the target display area.
9. A navigation display apparatus, characterized by being applied to a vehicle including a head-up display device and a front window, the apparatus comprising:
a first acquisition module for acquiring a first image, wherein the first image comprises at least one type of scene information around the vehicle;
The second acquisition module is used for acquiring map information of the vehicle and mapping the map information into the first image to obtain a target image;
the third acquisition module is used for acquiring the target human eye position of the driver and determining the target display area of the front window based on the target human eye position, wherein the human eye position of the driver and the display area of the front window have a corresponding position relationship;
and the control module is used for controlling the head-up display device to project the scene information and the map information in the target image to the target display area.
10. A vehicle, characterized in that the vehicle comprises:
a memory for storing executable program code;
a processor for calling and running the executable program code from the memory, causing the vehicle to perform the navigation display method of any one of claims 1 to 8.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed, implements the navigation display method according to any one of claims 1 to 8.
CN202311699724.5A 2023-12-11 2023-12-11 Navigation display method, device, vehicle and storage medium Pending CN117762365A (en)

Publications (1)

Publication Number: CN117762365A; Publication Date: 2024-03-26
Family ID: 90309880; Country: CN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination