CN109050401B - Augmented reality driving display method and device - Google Patents

Augmented reality driving display method and device

Info

Publication number
CN109050401B
CN109050401B (application CN201810742667.7A)
Authority
CN
China
Prior art keywords
image data
information
vehicle
spatial information
unit
Prior art date
Legal status
Active
Application number
CN201810742667.7A
Other languages
Chinese (zh)
Other versions
CN109050401A (en)
Inventor
凌霄
谢启宇
Current Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN201810742667.7A priority Critical patent/CN109050401B/en
Publication of CN109050401A publication Critical patent/CN109050401A/en
Application granted granted Critical
Publication of CN109050401B publication Critical patent/CN109050401B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; electric constitutive elements
    • B60R16/023: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; electric constitutive elements for transmission of signals between vehicle parts or subsystems

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides an augmented reality driving display method and device. The method comprises: receiving image data of the vehicle's surroundings, and corresponding spatial information, sent by a sensor assembly; if a configured feature region exists in the image data, extracting the image data and spatial information corresponding to that feature region; acquiring current vehicle information, and determining, according to the acquired vehicle information, the image data of the feature regions to be displayed among the extracted feature regions; generating a virtual picture from the image data of the feature regions to be displayed; and displaying the virtual picture as an overlay positioned according to the spatial information of the corresponding image data. The method can realistically display effective driving information and improve driving experience and driving safety.

Description

Augmented reality driving display method and device
Technical Field
The invention relates to the field of intelligent driving, in particular to a driving display method and device for augmented reality.
Background
Existing head-up display (HUD) devices and AR glasses can output various kinds of vehicle information in the form of graphics, numbers, text, and the like after the information is processed and converted by a computer.
The HUD relies on a projection element mounted in front of the console to project the received information onto the windshield glass. A control panel is also attached, through which the output image can be adjusted or changed.
In such a HUD, it is known that the position of the displayed image can be changed by controlling the projection element, so that the driver perceives the virtual image of the display screen as being formed not on the windshield but further forward than the windshield.
Existing AR glasses detect depth information of the surroundings through one or more sensors and superimpose information onto the display.
However, the navigation and warning pictures of driving display systems currently on the market are not based on the actual environment: the goal of combining the virtual with the real is not achieved, and the user's real surroundings cannot be presented. As a result, the user cannot react to the pictures in time, and usability is poor.
Disclosure of Invention
In view of this, the present application provides an augmented reality driving display method, which can truly display effective driving information and improve driving experience and driving safety.
In order to solve the technical problem, the technical scheme of the application is realized as follows:
an augmented reality driving display method applied to a driving display device on a vehicle, the method comprising:
receiving image data of the surrounding environment of the vehicle sent by a sensor assembly and corresponding spatial information;
if the image data has the configured characteristic region, extracting the image data and the spatial information corresponding to the existing characteristic region;
acquiring current vehicle information, and determining image data of a feature region to be displayed in the extracted feature region according to the acquired vehicle information;
generating a virtual picture from image data of a feature region to be displayed; and displaying the virtual picture in an overlapping manner according to the spatial information of the image data corresponding to the virtual picture.
An augmented reality driving display device, the device comprising: the device comprises a receiving unit, a determining unit, an acquiring unit, a generating unit and a display unit;
the receiving unit is used for receiving the image data of the surrounding environment of the vehicle sent by the sensor assembly and the corresponding spatial information;
the determining unit is used for determining whether the configured feature region exists in the image data received by the receiving unit, and for determining, according to the vehicle information acquired by the acquiring unit, the image data of the feature region to be displayed among the extracted feature regions;
the acquiring unit is used for extracting the image data and spatial information corresponding to the existing feature region when the determining unit determines that the configured feature region exists in the image data received by the receiving unit, and for acquiring the current vehicle information;
the generating unit is used for generating a virtual picture from the image data of the characteristic region to be displayed, which is determined by the determining unit;
and the display unit is used for displaying the virtual picture in a superposition manner according to the spatial information of the image data corresponding to the virtual picture generated by the generation unit.
According to the technical scheme, the surrounding environment of the vehicle is detected by using the sensor assembly, and image data and spatial information are returned; and determining image data corresponding to the characteristic region to be displayed according to the current vehicle information, generating a virtual picture, and displaying the virtual picture in a superposition mode according to the corresponding spatial information. The scheme can truly display effective driving information and improve driving experience and driving safety.
Drawings
Fig. 1 is a schematic view of a driving display process for implementing augmented reality in an embodiment of the present application;
FIG. 2 is a schematic diagram of displaying a pedestrian passing through a blind area in an embodiment of the present application;
FIG. 3 is a schematic diagram of a speed-limit guideboard display according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a guideboard display with a three-way intersection in the embodiment of the present application;
fig. 5 is a schematic view of a driving display process for implementing augmented reality in an embodiment of the present application;
FIG. 6 is a schematic view of a display guideboard in heavy rain in the embodiment of the present application;
fig. 7 is a schematic structural diagram of an apparatus applied to the above-described technology in the embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly apparent, the technical solutions of the present invention are described in detail below with reference to the accompanying drawings and examples.
The embodiment of the application provides an augmented reality driving display method, which is applied to a driving display device on a vehicle, detects the surrounding environment of the vehicle by using a sensor assembly, and returns image data and spatial information; and determining image data corresponding to the characteristic region to be displayed according to the current vehicle information, generating a virtual picture, and displaying the virtual picture in a superposition mode according to the corresponding spatial information. The scheme can truly display effective driving information and improve driving experience and driving safety.
The driving display device in the embodiment of the present application may be, but is not limited to, a HUD or AR glasses.
The sensor assembly is used to acquire image data and spatial information of the vehicle surroundings, as well as GPS information.
The sensor assembly comprises at least a GPS positioning device, a photographing device, and a spatial information acquisition device; if the photographing device can also acquire spatial information, the photographing device and the spatial information acquisition device may be combined into one device.
The GPS positioning device is used for acquiring GPS information;
the photographing device herein is not limited, and may be a monocular camera, a binocular camera, a structured light depth camera, a TOF depth camera, and the like, for example. A user of the photographing device obtains image data;
the spatial information acquisition device is not limited, such as a laser radar, a millimeter wave radar, a range finder, and the like; for obtaining spatial information.
The origin of the spatial information is defined as the optical center position of the photographing device when the vehicle is started for the first time. The spatial coordinates corresponding to the surrounding environment acquired during the running of the vehicle refer to three-dimensional coordinates relative to the origin.
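The coordinate convention above can be illustrated with a short sketch (not part of the patent disclosure): transforming a point measured in the current camera frame into the fixed world frame whose origin is the optical center at first start-up. The function name and the pose inputs `R_wc`/`t_wc` (the camera's current rotation and position relative to that origin) are hypothetical; how the pose itself is estimated is outside the disclosure.

```python
import numpy as np

def to_world_frame(point_cam, R_wc, t_wc):
    """Map a 3-D point from the current camera frame into the fixed world
    frame whose origin is the camera's optical center at first start-up.

    point_cam: (3,) point in the current camera frame, in metres
    R_wc:      (3, 3) rotation of the camera frame w.r.t. the world frame
    t_wc:      (3,) current optical-center position in the world frame
    """
    return (np.asarray(R_wc, dtype=float) @ np.asarray(point_cam, dtype=float)
            + np.asarray(t_wc, dtype=float))
```

With this convention, spatial information stored for a feature region stays valid across frames, because it is always expressed relative to the one fixed origin.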
Example one
The following describes in detail a driving display process of the present application for realizing augmented reality in a good environment of a vehicle.
Referring to fig. 1, fig. 1 is a schematic view of a driving display process for implementing augmented reality in an embodiment of the present application. The method comprises the following specific steps:
step 101, the driving display device receives image data of the vehicle surroundings sent by the sensor assembly and corresponding spatial information.
Step 102, if it is determined that the configured feature region exists in the image data, extract the image data and spatial information corresponding to that feature region.
The characteristic area is an area related to driving and can be configured according to actual requirements, such as road edges, zebra crossings, guideboards and the like.
In a specific implementation of the present application, after extracting image data and spatial information corresponding to the existing feature region, the method further includes:
obtaining map information; wherein the map information is composed of information of a feature area;
and updating the image data and the spatial information corresponding to the determined characteristic region into the map information.
When the image data of the vehicle surroundings transmitted by the sensor assembly and the corresponding spatial information are received, the method further includes:
receiving GPS information of corresponding image data sent by the sensor assembly;
the information of the feature area includes: spatial information of the feature region, image data, and GPS information.
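The structure of the map information described above, where each feature-area entry carries spatial information, image data, and GPS information, might be organized as in the following minimal sketch. `FeatureRecord`, `FeatureMap`, and the rounded-GPS keying are illustrative assumptions, not details given in the disclosure:

```python
from dataclasses import dataclass

@dataclass
class FeatureRecord:
    gps: tuple      # (latitude, longitude) received with the image data
    spatial: tuple  # 3-D coordinates relative to the fixed origin
    image: bytes    # image data of the feature region

class FeatureMap:
    """Map information keyed by rounded GPS position, so that a record
    saved in good conditions can be found again from a later GPS fix."""

    def __init__(self, precision=4):
        self.precision = precision  # 4 decimal places is roughly 11 m of latitude
        self._store = {}

    def _key(self, gps):
        return (round(gps[0], self.precision), round(gps[1], self.precision))

    def update(self, record):
        # A newly extracted feature region overwrites any stale entry.
        self._store[self._key(record.gps)] = record

    def lookup(self, gps):
        return self._store.get(self._key(gps))
```

The update step in the disclosure then maps to `FeatureMap.update`, and the bad-weather fallback of the second embodiment maps to `FeatureMap.lookup`.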
And 103, acquiring current vehicle information, and determining image data of the characteristic region to be displayed in the extracted characteristic region according to the acquired vehicle information.
The vehicle information in the embodiment of the present application includes: data on a CAN bus inside the vehicle and road condition information acquired through a navigation map.
Data on the CAN bus comprises but is not limited to vehicle speed, oil consumption, residual oil quantity, mileage, blind zone detection data and the like;
the road condition information includes, but is not limited to, a starting position, a destination position, a number of traveled kilometers, a number of distance to a destination, a forward warning condition (construction, traffic accident, jam, speed limit, etc.), a forward route condition (what road is currently on, what branches are ahead, what roads are respectively accessed to), and the like.
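As a sketch of how such vehicle information could drive the selection of feature regions to display in step 103 (the dictionary keys `blind_zone_alert`, `speed`, and `speed_limit` are hypothetical stand-ins for CAN-bus and navigation-map fields):

```python
def regions_to_display(extracted_regions, vehicle_info):
    """Choose which extracted feature regions should be rendered.

    extracted_regions: dicts with a 'kind' key, e.g. 'road_edge',
                       'guideboard', 'zebra_crossing'
    vehicle_info:      CAN-bus and navigation fields, e.g.
                       {'blind_zone_alert': True, 'speed': 95, 'speed_limit': 80}
    """
    selected = []
    for region in extracted_regions:
        if region['kind'] == 'road_edge' and vehicle_info.get('blind_zone_alert'):
            # A blind-zone detection makes the road edge worth highlighting.
            selected.append(region)
        elif (region['kind'] == 'guideboard'
              and vehicle_info.get('speed', 0) > vehicle_info.get('speed_limit', float('inf'))):
            # Over the limit: re-display the speed-limit guideboard.
            selected.append(region)
    return selected
```

Real deployments would cover many more rules (construction, jams, route branches), but the shape of the decision, vehicle information gating which feature regions become virtual pictures, is the same.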
Step 104, the driving display device generates a virtual picture from the image data of the feature region to be displayed, and displays the virtual picture as an overlay according to the spatial information of the image data corresponding to the virtual picture.
Whether the vehicle information contains content that needs to be displayed, and how the virtual picture corresponding to the feature region's image data is overlaid, can be set according to the specific application and handled according to existing implementations.
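One plausible way to realize the overlay placement of step 104 is a pinhole projection of the feature region's 3-D spatial information into 2-D display coordinates. The calibrated intrinsic matrix `K` is an assumption for illustration; the disclosure does not prescribe a projection model:

```python
import numpy as np

def overlay_position(point_world, K):
    """Project a 3-D point (viewer frame, metres) to 2-D display pixels
    using a pinhole model with 3x3 intrinsic matrix K."""
    p = np.asarray(point_world, dtype=float)
    if p[2] <= 0:
        return None  # behind the viewer: nothing to draw
    u, v, w = K @ p
    return (u / w, v / w)
```

The returned pixel coordinates are where the virtual picture would be drawn so that it appears anchored at the feature region's real spatial position.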
The following provides an implementation process in a specific application scenario of the present application with reference to the accompanying drawings.
The driving display device receives the image data of the surrounding environment of the vehicle sent by the sensor assembly and the corresponding spatial information.
If the image data is determined to have the configured characteristic region, extracting the image data and the spatial information corresponding to the existing characteristic region; otherwise, the current processing is finished, that is, no information needing to be displayed in an overlapping manner exists.
Here, the characteristic area is, for example, a road edge, a zebra crossing, a road sign, etc.
The driving display device acquires current vehicle information, and determines image data of a feature region to be displayed in the extracted feature region according to the acquired vehicle information.
Assume the acquired vehicle information indicates that a pedestrian is passing through the blind area on the right side, or that the speed limit of the current road section is 80 km/h, or that no guideboard is present although a three-way intersection lies ahead. The corresponding feature regions are then determined: the image data corresponding to the road edge and the guideboard area need to be displayed, so a virtual picture is generated from that image data and displayed as an overlay at the position given by the corresponding spatial information.
Referring to fig. 2, fig. 2 is a display schematic diagram of a pedestrian passing through a blind area in the embodiment of the present application. The left image in fig. 2 shows the actual scene, in which a pedestrian passes through the blind area on the right side of the vehicle; the right image shows the overlay produced by the technical scheme of this embodiment, which lets the driver see the pedestrian in the blind area and thus improves driving safety.
Referring to fig. 3, fig. 3 is a display schematic diagram of the speed-limit guideboard in the embodiment of the present application. In fig. 3, the guideboard ahead on the right side of the vehicle is broken or absent and therefore shows nothing, but the current road section is determined to be speed-limited according to the vehicle information. A virtual guideboard is therefore generated: fig. 3 virtually displays a speed limit of 80 km/h on the right side of the road, giving the visual impression of a real speed-limit guideboard at the roadside.
Referring to fig. 4, fig. 4 is a schematic diagram of a guideboard display at a three-way intersection in the embodiment of the present application. The left image of fig. 4 shows a three-way intersection ahead on the road where the vehicle is traveling, with no sign displayed above it; on the right of fig. 4, an indication guideboard (road names and directions) is virtually displayed, giving the visual impression of a real road sign at the intersection.
Example two
The following describes, with reference to the accompanying drawings, the driving display process for realizing augmented reality under severe environmental conditions, such as rainy and snowy weather.
Referring to fig. 5, fig. 5 is a schematic view of a driving display flow for implementing augmented reality in the second embodiment of the present application. The method comprises the following specific steps:
step 501, when the driving display device does not receive the image data of the surrounding environment of the vehicle and the corresponding spatial information, or the received image data fails to meet a preset requirement, using the received GPS data sent by the sensor assembly to search the corresponding image data and spatial information in the map information.
The map information is acquired and stored when the environmental conditions are good.
Step 502, the driving display device obtains current vehicle information, and determines the searched image data and spatial information to be displayed according to the vehicle information.
The vehicle information in the embodiment of the present application includes: data on a CAN bus inside the vehicle and road condition information acquired through a navigation map.
Data on the CAN bus comprises but is not limited to vehicle speed, oil consumption, residual oil quantity, mileage, blind zone detection data and the like;
the road condition information includes, but is not limited to, a starting position, a destination position, a number of traveled kilometers, a number of distance to a destination, a forward warning condition (construction, traffic accident, jam, speed limit, etc.), a forward route condition (what road is currently on, what branches are ahead, what roads are respectively accessed to), and the like.
Step 503, the driving display device generates a virtual picture from the image data to be displayed; and displaying the virtual picture in an overlapping manner according to the spatial information of the image data corresponding to the virtual picture.
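The GPS fallback lookup of step 501 can be sketched as a nearest-record search over the stored map information. The equirectangular distance approximation and the 30 m match radius are illustrative assumptions; a real implementation would also consider heading and map-matching:

```python
import math

EARTH_RADIUS_M = 6371000.0

def nearest_record(gps, records, max_metres=30.0):
    """Return the stored map record closest to the current GPS fix,
    or None if nothing lies within max_metres."""
    lat, lon = gps
    best, best_d = None, max_metres
    for rec in records:
        rlat, rlon = rec['gps']
        # Equirectangular approximation: adequate over tens of metres.
        dx = math.radians(rlon - lon) * math.cos(math.radians(lat)) * EARTH_RADIUS_M
        dy = math.radians(rlat - lat) * EARTH_RADIUS_M
        d = math.hypot(dx, dy)
        if d < best_d:
            best, best_d = rec, d
    return best
```

When the sensor assembly fails to deliver usable images, the record found this way supplies both the image data for the virtual picture and the spatial information for positioning it.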
The following is a detailed description with reference to specific examples:
While the driver is driving in a rainy environment, the function of the sensor assembly is severely limited, and the acquired image information is unclear or not acquired at all. The corresponding image data and spatial information are then retrieved from the stored map information according to the received GPS data.
If it is determined from the vehicle information that there is a guideboard on the right side of the road that cannot be seen clearly and needs to be displayed virtually, a virtual picture is generated from the retrieved image data corresponding to the guideboard and displayed as an overlay according to the spatial information corresponding to that image data.
Referring to fig. 6, fig. 6 is a schematic view of a displayed guideboard in heavy rain in the embodiment of the present application. The left side of fig. 6 shows the actually observed traffic information, in which the 40 km/h speed-limit sign of the current road section is not visible; the right side shows the result of applying the technical scheme provided by the application, with the 40 km/h speed-limit sign displayed ahead on the right of the vehicle. This improves driving safety in severe environments.
Based on the same inventive concept, the application also provides an augmented reality driving display device. Referring to fig. 7, fig. 7 is a schematic structural diagram of an apparatus applied to the above technology in the embodiment of the present application. The device includes: a receiving unit 701, a determining unit 702, an acquiring unit 703, a generating unit 704, and a display unit 705;
a receiving unit 701, configured to receive image data of the vehicle surroundings sent by a sensor assembly, and corresponding spatial information;
a determining unit 702, configured to determine whether the configured feature region exists in the image data received by the receiving unit 701, and to determine, according to the vehicle information acquired by the acquiring unit 703, the image data of the feature region to be displayed among the extracted feature regions;
an acquiring unit 703, configured to extract the image data and spatial information corresponding to the existing feature region when the determining unit 702 determines that the configured feature region exists in the image data received by the receiving unit 701, and to acquire the current vehicle information;
a generating unit 704 configured to generate a virtual screen from the image data of the feature region to be displayed determined by the determining unit 702;
a display unit 705 for displaying the virtual screen in a superimposed manner according to the spatial information of the image data corresponding to the virtual screen generated by the generation unit 704.
Preferably,
an obtaining unit 703, configured to obtain map information when the determining unit 702 determines that the image data and the spatial information corresponding to the existing feature area exist; wherein the map information is composed of information of a feature area; and updating the image data and the spatial information corresponding to the determined characteristic region into the map information.
Preferably,
the receiving unit is further used for receiving the GPS information corresponding to the image data sent by the sensor assembly when the image data of the surrounding environment of the vehicle sent by the sensor assembly is received; wherein the information of the feature region includes: spatial information of the feature region, image data, and GPS information.
Preferably,
the obtaining unit 703 is further configured to, when the image data of the environment around the vehicle and the corresponding spatial information are not received, or the received image data fails to meet a preset requirement, use the received GPS data sent by the sensor component to search the map information for the corresponding image data and spatial information; acquiring current vehicle information;
a determining unit 702, further configured to determine the searched image data and spatial information to be displayed according to the vehicle information;
a generating unit 704 for generating the image data to be displayed determined by the determining unit 702 into a virtual screen;
a display unit 705 for displaying the virtual screen in a superimposed manner according to the spatial information of the image data corresponding to the virtual screen generated by the generation unit 704.
Preferably,
the vehicle information includes: data on a CAN bus inside the vehicle and road condition information acquired through a navigation map.
The units of the above embodiments may be integrated into one body, or may be separately deployed; may be combined into one unit or further divided into a plurality of sub-units.
To sum up, in order to enable the driving display system to display driving information overlaid at the actual position of the road, and to let the driver still observe effective driving information in severe weather, the driving display system can use the sensor assembly to detect the spatial information of feature regions in the environment, generate a virtual prompt picture at the actual spatial position according to real-time vehicle information, and display it as an overlay on the HUD or AR glasses.
A road feature region detection module is built on a sensor combination for environment perception, establishing the 3D spatial relation among the driver, the automobile, and the road; the coordinate information of feature regions is calculated by saving and searching a local map; a virtual picture is composed according to the real-time information of the vehicle information module; and a display adjusting module adjusts the picture to the parameters of the HUD's projection array or the virtual world of the AR glasses' graphics engine and finally displays it. In this way, effective and important driving information can be displayed more realistically, adapting well to the driver and improving driving experience and driving safety.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. An augmented reality driving display method applied to a driving display device on a vehicle, the method comprising:
receiving image data of the surrounding environment of the vehicle sent by a sensor assembly and corresponding spatial information;
if the image data has the configured characteristic region, extracting the image data and the spatial information corresponding to the existing characteristic region;
acquiring current vehicle information, and determining image data of a feature region to be displayed in the extracted feature region according to the acquired vehicle information;
generating a virtual picture from image data of a feature region to be displayed; superposing and displaying the virtual picture according to the spatial information of the image data corresponding to the virtual picture;
wherein the method further comprises:
when the image data of the surrounding environment of the vehicle and the corresponding spatial information are not received or the received image data fail to meet the preset requirements, searching the corresponding image data and the corresponding spatial information in the map information by using the received GPS data sent by the sensor assembly;
acquiring current vehicle information, and determining searched image data and spatial information to be displayed according to the vehicle information;
generating a virtual picture from image data to be displayed; and displaying the virtual picture in an overlapping manner according to the spatial information of the image data corresponding to the virtual picture.
2. The method according to claim 1, wherein when determining the image data and the spatial information corresponding to the existing feature region, the method further comprises:
obtaining map information; wherein the map information is composed of information of a feature area;
and updating the image data and the spatial information corresponding to the determined characteristic region into the map information.
3. The method of claim 2, wherein upon receiving the image data of the vehicle surroundings transmitted by the sensor assembly, and the corresponding spatial information, the method further comprises:
receiving GPS information of corresponding image data sent by the sensor assembly;
the information of the feature area includes: spatial information of the feature region, image data, and GPS information.
4. The method according to any one of claims 1 to 3,
the vehicle information includes: data on a CAN bus inside the vehicle and road condition information acquired through a navigation map.
5. The method according to any one of claims 1 to 3,
the driving display device is HUD or AR glasses;
when the display device is a HUD, displaying a virtual picture in a projection array;
and when the display device is the AR glasses, displaying the virtual picture in a virtual world of a graphic engine of the AR glasses.
6. An augmented reality driving display device, comprising: the device comprises a receiving unit, a determining unit, an acquiring unit, a generating unit and a display unit;
the receiving unit is used for receiving the image data of the surrounding environment of the vehicle sent by the sensor assembly and the corresponding spatial information;
the determining unit is used for determining whether the configured feature region exists in the image data received by the receiving unit, and for determining, according to the vehicle information acquired by the acquiring unit, the image data of the feature region to be displayed among the extracted feature regions;
the acquiring unit is used for extracting the image data and spatial information corresponding to the existing feature region when the determining unit determines that the configured feature region exists in the image data received by the receiving unit, and for acquiring the current vehicle information;
the generating unit is used for generating a virtual picture from the image data of the characteristic region to be displayed, which is determined by the determining unit;
the display unit is used for displaying the virtual picture in an overlapping mode according to the space information of the image data corresponding to the virtual picture generated by the generation unit;
the acquisition unit is further used for searching the corresponding image data and the corresponding spatial information in the map information by using the received GPS data sent by the sensor assembly when the image data of the surrounding environment of the vehicle and the corresponding spatial information are not received or the received image data fail to meet the preset requirement; acquiring current vehicle information;
the determining unit is further used for determining the searched image data and spatial information to be displayed according to the vehicle information;
the generating unit is used for generating the image data to be displayed determined by the determining unit into a virtual picture;
and the display unit is used for displaying the virtual picture in a superposition manner according to the spatial information of the image data corresponding to the virtual picture generated by the generation unit.
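The receive → determine → acquire → generate → display flow of claim 6, including its GPS/map fallback when no usable image data arrives, can be sketched as one display cycle. This is a hypothetical reading of the claim; every name and data shape here is illustrative, not from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureRegion:
    image_data: str       # pixels of the matched region (placeholder type)
    spatial_info: tuple   # e.g. position relative to the vehicle
    gps: Optional[tuple] = None

def render_frame(camera_frame: Optional[dict],
                 gps_fix: tuple,
                 map_info: dict,
                 vehicle_info: dict) -> Optional[str]:
    """One display cycle: prefer live sensor data, fall back to the map."""
    if camera_frame and camera_frame.get("regions"):
        # Normal path: configured feature regions were found in the
        # received image data, so extract them with their spatial info.
        regions = [FeatureRegion(r["image"], r["spatial"])
                   for r in camera_frame["regions"]]
    elif gps_fix in map_info:
        # Fallback path: no (usable) image data, so look up previously
        # stored feature regions in the map by the current GPS fix.
        regions = map_info[gps_fix]
    else:
        return None
    # Filter by current vehicle state (speed, route, road conditions...);
    # the real selection criteria are not specified at this level.
    to_show = [r for r in regions if vehicle_info.get("speed", 0) >= 0]
    if not to_show:
        return None
    # Generate the virtual picture and overlay it at each region's
    # spatial position.
    return "; ".join(f"overlay {r.image_data} at {r.spatial_info}"
                     for r in to_show)
```

The key design point the claim encodes is that both paths converge on the same generate-and-overlay step, so the display unit never needs to know which source supplied the feature regions.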
7. The apparatus of claim 6,
the acquiring unit is further used for acquiring map information when the image data and the spatial information corresponding to the existing feature region have been determined by the determining unit; wherein the map information is composed of information of feature regions; and for updating the image data and the spatial information corresponding to the determined feature region into the map information.
8. The apparatus of claim 7,
the receiving unit is further used for receiving, when receiving the image data of the surrounding environment of the vehicle sent by the sensor assembly, the GPS information corresponding to the image data; wherein the information of the feature region includes: the spatial information, the image data, and the GPS information of the feature region.
9. The apparatus according to any one of claims 6 to 8,
the vehicle information includes: data on a CAN bus inside the vehicle and road condition information acquired through a navigation map.
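Claim 9 defines the vehicle information as two merged sources: readings from the in-vehicle CAN bus and road condition information from a navigation map. A small sketch of that combination, with all field names hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class VehicleInfo:
    # Readings taken from the in-vehicle CAN bus (illustrative fields).
    speed_kmh: float = 0.0
    turn_signal: str = "off"
    # Road condition information obtained through a navigation map.
    road_conditions: dict = field(default_factory=dict)

def merge_vehicle_info(can_data: dict, nav_conditions: dict) -> VehicleInfo:
    """Combine CAN-bus readings with navigation-map road conditions
    into the single vehicle-information record the claims refer to."""
    return VehicleInfo(
        speed_kmh=can_data.get("speed_kmh", 0.0),
        turn_signal=can_data.get("turn_signal", "off"),
        road_conditions=nav_conditions,
    )
```

Merging the two sources up front lets the determining unit filter feature regions against a single record rather than querying the bus and the map separately.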
CN201810742667.7A 2018-07-09 2018-07-09 Augmented reality driving display method and device Active CN109050401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810742667.7A CN109050401B (en) 2018-07-09 2018-07-09 Augmented reality driving display method and device


Publications (2)

Publication Number Publication Date
CN109050401A CN109050401A (en) 2018-12-21
CN109050401B true CN109050401B (en) 2020-08-21

Family

ID=64819575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810742667.7A Active CN109050401B (en) 2018-07-09 2018-07-09 Augmented reality driving display method and device

Country Status (1)

Country Link
CN (1) CN109050401B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109819160A * 2019-01-04 2019-05-28 惠州市凯越电子股份有限公司 Video generation method and device for an AI- and IoT-based 360-degree panoramic driving visual assistance system
CN109883414B (en) * 2019-03-20 2021-08-27 百度在线网络技术(北京)有限公司 Vehicle navigation method and device, electronic equipment and storage medium
CN111076742A (en) 2019-12-17 2020-04-28 百度国际科技(深圳)有限公司 Display method and device of AR navigation, electronic equipment and storage medium
CN114326119A (en) * 2020-05-15 2022-04-12 华为技术有限公司 Head-up display device and head-up display method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103105174B * 2013-01-29 2016-06-15 四川长虹佳华信息产品有限责任公司 Vehicle-mounted real-scene safety navigation method based on AR augmented reality
CN103150759B * 2013-03-05 2015-11-25 腾讯科技(深圳)有限公司 Method and apparatus for dynamic augmentation of street view images
DE102013016244A1 (en) * 2013-10-01 2015-04-02 Daimler Ag Method and device for augmented presentation
CN105513389B (en) * 2015-11-30 2018-04-06 小米科技有限责任公司 The method and device of augmented reality
US10049499B2 (en) * 2016-08-29 2018-08-14 Toyota Jidosha Kabushiki Kaisha Method of ground adjustment for in-vehicle augmented reality systems
CN106355153B * 2016-08-31 2019-10-18 上海星视度科技有限公司 Virtual object display method, device and system based on augmented reality
CN108151759B (en) * 2017-10-31 2022-02-11 捷开通讯(深圳)有限公司 Navigation method, intelligent terminal and navigation server

Also Published As

Publication number Publication date
CN109050401A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN109050401B (en) Augmented reality driving display method and device
CN107199948B (en) Vehicle display control unit
WO2019097763A1 (en) Superposed-image display device and computer program
US8536995B2 (en) Information display apparatus and information display method
US10488658B2 (en) Dynamic information system capable of providing reference information according to driving scenarios in real time
US11525694B2 (en) Superimposed-image display device and computer program
JP6415583B2 (en) Information display control system and information display control method
JP2015523624A (en) A method for generating a virtual display surface from a video image of a landscape based on a road
CN111279689B (en) Display system, display method, and storage medium
JP7476568B2 (en) Superimposed image display device, superimposed image drawing method, and computer program
US11972616B2 (en) Enhanced navigation instructions with landmarks under difficult driving conditions
JP2010234959A (en) Information display device
US10996469B2 (en) Method and apparatus for providing driving information of vehicle, and recording medium
WO2019038904A1 (en) Surrounding vehicle display method and surrounding vehicle display apparatus
CN113165510B (en) Display control device, method, and computer program
JP6186905B2 (en) In-vehicle display device and program
WO2021006060A1 (en) Display control device and display control program
US20240101138A1 (en) Display system
CN117162777B (en) Content presentation method, device, equipment and storage medium
CN109801355A Method and device for expanding the driver's field of vision
JP2019087259A (en) Superposition image display device and computer program
US20240106989A1 (en) Vehicle display control device and non-transitory computer-readable medium
US20240042857A1 (en) Vehicle display system, vehicle display method, and computer-readable non-transitory storage medium storing vehicle display program
JP2024069745A (en) Display System
WO2020121810A1 (en) Display control device, display control program, and tangible, non-transitory computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant