WO2019224922A1 - Head-up display control device, head-up display system, and head-up display control method


Info

Publication number
WO2019224922A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
unit
driver
head
image information
Application number
PCT/JP2018/019695
Other languages
English (en)
Japanese (ja)
Inventor
Shuhei Ota (太田 脩平)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Application filed by Mitsubishi Electric Corporation (三菱電機株式会社)
Priority to PCT/JP2018/019695
Publication of WO2019224922A1


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 Arrangement or adaptations of instruments

Description

  • The present invention relates to a head-up display control device that controls a head-up display device for a vehicle, a head-up display system that includes the head-up display device and the head-up display control device, and a head-up display control method for controlling the head-up display device.
  • A vehicle HUD (Head-Up Display) device is a display device that allows a driver to view image information without greatly moving his or her line of sight away from the forward field of view.
  • An AR-HUD device using AR (Augmented Reality) can present information to the driver more intuitively and understandably than an existing HUD device by superimposing image information, such as a route guidance arrow, on a real object such as a road (see, for example, Patent Document 1).
  • the AR-HUD device described in Patent Document 1 has a configuration in which an image displayed on an image display device such as a projector or a liquid crystal display is reflected by a mirror and projected onto a windshield of a vehicle.
  • By viewing the image projected on the windshield, the driver perceives it as a virtual image formed in front of the transparent windshield.
  • A display visual recognition area, in which the driver can view the virtual image, needs to exist within the recommended display area, in which the driver can comfortably view the virtual image. Since the driver's eye height and the look-down angle toward the virtual image differ for each position of the driver's eyes, the display visual recognition area also differs for each eye position. Therefore, for the driver to be able to view the virtual image regardless of the position of his or her eyes, the AR-HUD device would need to enlarge the display visual recognition area by enlarging the virtual image. However, enlarging the virtual image requires enlarging the video display device, the mirror, and so on, which results in the problem that the AR-HUD device becomes larger. Since the space on the vehicle side where the AR-HUD device is installed is limited, increasing the size of the AR-HUD device is not preferable.
  • The present invention has been made to solve the above-described problems, and an object of the present invention is to allow the driver to view a virtual image superimposed on a real object regardless of the position of the driver's eyes, without increasing the size of the head-up display device.
  • A head-up display control device according to the present invention controls a head-up display device that includes a display unit that displays image information and a reflection mirror that reflects the image information displayed by the display unit and projects it onto a projection surface, and that superimposes the image information as a virtual image on the foreground of a vehicle viewed by the driver. The head-up display control device includes: an eye position detection unit that detects the position of the driver's eyes; an image generation unit that generates the image information to be displayed on the display unit; and an area changing unit that changes, according to the position of the driver's eyes detected by the eye position detection unit, a superimposed display area in which the image information generated by the image generation unit is superimposed and displayed as a virtual image on a real object in the foreground of the vehicle.
  • According to the present invention, the superimposed display area in which the image information is superimposed and displayed as a virtual image on a real object in the foreground of the vehicle is changed according to the position of the driver's eyes. Therefore, the driver can view the virtual image superimposed on the real object regardless of the position of the driver's eyes, without increasing the size of the head-up display device.
  • FIG. 1 is a block diagram showing the main part of the HUD system according to Embodiment 1.
  • FIG. 2 is a configuration diagram of the HUD system according to Embodiment 1 when mounted on a vehicle.
  • FIG. 3 is a diagram explaining the difference in the display visual recognition area according to the height-direction position of the driver's eyes.
  • FIGS. 4A, 4B, and 4C are reference examples for aiding understanding of the HUD system according to Embodiment 1, showing the states in which drivers with medium, high, and low eye positions in the height direction, respectively, view the vehicle foreground.
  • FIG. 5A is a reference example for aiding understanding of the HUD system according to Embodiment 1, and is a diagram explaining the difference in the display visual recognition area according to the height-direction position of the driver's eyes when the virtual image is enlarged in the height direction compared with FIG. 3. FIGS. 5B and 5C are similar reference examples showing the states in which drivers with high and low eye positions in the height direction, respectively, view the vehicle foreground when the virtual image is enlarged in the height direction.
  • FIG. 6A is a diagram illustrating an example of changing the superimposed display area according to the position of the driver's eyes in Embodiment 1, for the case where the driver's eye position in the height direction is high.
  • FIG. 6B is a diagram showing the state in which a driver with a medium eye position in the height direction views the vehicle foreground in Embodiment 1.
  • FIG. 6C is a diagram showing the state in which a driver with a high eye position in the height direction views the vehicle foreground in Embodiment 1.
  • FIG. 7A is a diagram illustrating an example of changing the superimposed display area according to the position of the driver's eyes in Embodiment 1, for the case where the driver's eye position in the height direction is low.
  • FIG. 7B is a diagram showing the state in which a driver with a medium eye position in the height direction views the vehicle foreground in Embodiment 1.
  • FIG. 7C is a diagram showing the state in which a driver with a low eye position in the height direction views the vehicle foreground in Embodiment 1.
  • FIG. 8 is a flowchart showing an operation example of the HUD control device according to Embodiment 1.
  • FIG. 9 is a diagram illustrating an example of eye position determination performed by the eye position detection unit according to Embodiment 1.
  • FIG. 10 is a diagram illustrating an example of changing the superimposed display area according to the position of the driver's eyes in Embodiment 2.
  • FIG. 11 is a block diagram showing the main part of the HUD system according to Embodiment 3.
  • FIG. 12 is a configuration diagram of the HUD system according to Embodiment 3 when mounted on a vehicle.
  • FIG. 13 is a diagram showing the correspondence between the tilt angle of the reflection mirror and the position of the virtual image.
  • FIG. 14 is a flowchart showing an operation example of the HUD control device according to Embodiment 3.
  • FIG. 15 is a block diagram showing the main part of the HUD system according to Embodiment 4.
  • FIG. 16 is a configuration diagram of the HUD system according to Embodiment 4 when mounted on a vehicle.
  • FIG. 17 is a diagram showing the correspondence between the depth position of the HUD device and the position of the virtual image.
  • FIG. 18 is a flowchart showing an operation example of the HUD control device according to Embodiment 4.
  • FIGS. 19A and 19B are diagrams showing hardware configuration examples of the HUD control device according to each embodiment.
  • FIG. 1 is a block diagram showing a main part of the HUD system 4 according to the first embodiment.
  • FIG. 2 is a configuration diagram of the HUD system 4 according to Embodiment 1 when mounted on a vehicle.
  • a vehicle 1 is equipped with a HUD system 4 including a HUD control device 2 and a HUD device 3, and an in-vehicle device 5.
  • the HUD device 3 includes a display unit 31 and a reflection mirror 32.
  • the display unit 31 displays image information generated by the HUD control device 2.
  • As the display unit 31, for example, a liquid crystal display, a projector, or a laser light source is used.
  • the reflection mirror 32 reflects the display light of the image information displayed by the display unit 31 and projects it onto the windshield 300.
  • The driver views, from the position of the eye 100, the display object 201 of the virtual image 200 that is formed through the windshield 300.
  • the windshield 300 is a projection surface of the virtual image 200.
  • The projection surface is not limited to the windshield 300, and may be a half mirror called a combiner.
  • the HUD control device 2 includes an eye position detection unit 21, a region change unit 22, an image generation unit 23, and a database 24.
  • the eye position detection unit 21 acquires captured image information of a driver captured by an in-vehicle camera 51 described later, analyzes the acquired captured image information, and detects the position of the driver's eye 100 in the height direction.
  • the eye position detection unit 21 may detect the positions of the left eye and the right eye of the driver as the position of the driver's eye 100, or may detect the center positions of the left eye and the right eye. Further, the eye position detection unit 21 may estimate the center positions of the left eye and the right eye from the driver's face position in the captured image information.
  • the area changing unit 22 changes the superimposed display area according to the position of the driver's eye 100 detected by the eye position detecting unit 21. Specifically, the area changing unit 22 calculates a display visual recognition area based on the position of the driver's eye 100 and the position of the virtual image 200. Further, the region changing unit 22 specifies a display recommended region corresponding to the display object 201 of the virtual image 200, and changes the superimposed display region based on the display visual recognition region and the display recommended region. The area changing unit 22 also changes the non-superimposed display area based on the display visual recognition area and the display recommended area.
  • FIG. 3 is a diagram for explaining a difference between the display visual recognition areas 401H, 401M, and 401L corresponding to the positions 100H, 100M, and 100L in the height direction of the driver's eyes.
  • In the following, the display visual recognition area, the recommended display area, the superimposed display area, and the non-superimposed display area are each expressed as a depth distance in front of the driver, with the position of the driver's eyes taken as 0 m.
  • the display visual recognition area 401 is an area where the driver can visually recognize the display object 201 of the virtual image 200 superimposed on a real object in the vehicle foreground, and differs depending on the position of the driver's eyes in the height direction.
  • Here, the position of the driver's eyes in the height direction is divided into three levels, high (H), medium (M), and low (L), which are referred to as the high eye position 100H, the medium eye position 100M, and the low eye position 100L.
  • For example, the high eye position 100H is 1.46 m, the medium eye position 100M is 1.4 m, and the low eye position 100L is 1.34 m.
  • The display visual recognition area 401H at the high eye position 100H is 18 m to 56 m in depth, the display visual recognition area 401M at the medium eye position 100M is 20 m to 100 m, and the display visual recognition area 401L at the low eye position 100L is 23 m to 670 m.
  • These display visual recognition areas 401H, 401M, and 401L are calculated based on trigonometric functions.
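  • For reference, these depth ranges can be reproduced with simple similar-triangle geometry: a sight line from the eye through an edge of the virtual image is extended until it meets the road surface, as in the sketch below. This is a minimal illustration, not the patent's implementation; the virtual image is assumed to be 5 m ahead of the eye with its lower and upper edges at about 1.05 m and 1.33 m above the ground, assumed values chosen so that the output matches the ranges quoted above.

```python
def display_viewing_area(eye_height_m, image_distance_m=5.0,
                         image_bottom_m=1.05, image_top_m=1.33):
    """Depth range (near, far) on the road that is seen through the virtual image.

    A sight line from the eye through an image edge at height h meets the ground at
    depth = eye_height * image_distance / (eye_height - h)   (similar triangles).
    The lower edge gives the near limit, the upper edge the far limit.
    """
    def ground_depth(edge_height_m):
        drop = eye_height_m - edge_height_m
        if drop <= 0:
            return float("inf")  # sight line never reaches the ground
        return eye_height_m * image_distance_m / drop

    return ground_depth(image_bottom_m), ground_depth(image_top_m)

if __name__ == "__main__":
    for label, eye_h in (("100H (high)", 1.46), ("100M (medium)", 1.40), ("100L (low)", 1.34)):
        near, far = display_viewing_area(eye_h)
        print(f"{label}: {near:.0f} m to {far:.0f} m")
    # Prints approximately 18-56 m, 20-100 m and 23-670 m, matching FIG. 3.
```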
  • The recommended display area 402 is an area in which the driver can comfortably view the display object 201 of the virtual image 200.
  • This recommended display area 402 is predetermined according to the display object 201 of the virtual image 200. For example, when image information guiding the vehicle 1 to turn left at an intersection 75 m ahead is displayed as the display object 201 of the virtual image 200, the recommended display area 402 is 0 m to 100 m in depth.
  • Ideally, the display visual recognition area 401 includes the recommended display area 402. However, the display visual recognition areas 401H, 401M, and 401L, which differ depending on the driver's eye positions 100H, 100M, and 100L, do not always include the recommended display area 402, which differs depending on the display object 201. Therefore, in Embodiment 1, the area changing unit 22 handles the recommended display area 402 by dividing it into a superimposed display area included in the display visual recognition area 401 and a non-superimposed display area not included in the display visual recognition area 401.
  • The superimposed display area 403 is an area in which the driver can view the display object 201 of the virtual image 200 superimposed on a real object in the vehicle foreground, and is the area where the display visual recognition area 401 and the recommended display area 402 overlap. In the example of FIG. 3, the superimposed display area 403 at the medium eye position 100M is 20 m to 100 m in depth.
  • The non-superimposed display area 404 is an area in which the driver cannot view the display object 201 of the virtual image 200 superimposed on a real object in the vehicle foreground, and is the area where the display visual recognition area 401 and the recommended display area 402 do not overlap. Note that in the non-superimposed display area 404 the display object 201 of the virtual image 200 cannot be viewed superimposed on a real object, but it can still be viewed superimposed on the vehicle foreground. In the example of FIG. 3, the non-superimposed display area 404M at the medium eye position 100M is 0 m to 20 m in depth.
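  • In interval terms, the superimposed display area is simply the overlap between the display visual recognition area and the recommended display area, and the non-superimposed display area is the remainder of the recommended display area. The following is a minimal sketch of that split, using the 100M example above (display visual recognition area 20 m to 100 m, recommended area 0 m to 100 m); the function name is illustrative, not taken from the text.

```python
def split_recommended_area(viewing_m, recommended_m):
    """Split the recommended display area into superimposed / non-superimposed parts.

    viewing_m, recommended_m: (near, far) depth intervals in metres.
    Returns (superimposed_interval_or_None, list_of_non_superimposed_intervals).
    """
    v_near, v_far = viewing_m
    r_near, r_far = recommended_m

    near, far = max(v_near, r_near), min(v_far, r_far)
    superimposed = (near, far) if near < far else None  # overlap of the two intervals

    non_superimposed = []
    if superimposed is None:
        non_superimposed.append((r_near, r_far))          # no overlap at all
    else:
        if r_near < near:
            non_superimposed.append((r_near, near))       # too close to be superimposed
        if far < r_far:
            non_superimposed.append((far, r_far))         # too far to be superimposed
    return superimposed, non_superimposed

# Medium eye position 100M: area 401M = 20-100 m, recommended area 402 = 0-100 m.
print(split_recommended_area((20, 100), (0, 100)))
# -> ((20, 100), [(0, 20)]) : area 403 = 20-100 m, area 404M = 0-20 m
```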
  • FIG. 4A is a diagram showing a state in which a driver with a medium eye position in the height direction visually recognizes the vehicle foreground.
  • FIG. 4B is a diagram illustrating a state where a driver with a high eye position in the height direction visually recognizes the vehicle foreground.
  • FIG. 4C is a diagram illustrating a state where a driver with a low eye position in the height direction visually recognizes the vehicle foreground.
  • FIGS. 4A, 4B, and 4C are reference examples for aiding understanding of the superimposed display areas 403H, 403M, and 403L of Embodiment 1 (FIG. 6A and the like described later); in FIGS. 4A, 4B, and 4C, the superimposed display area 403 is the same regardless of the driver's eye positions 100H, 100M, and 100L.
  • FIGS. 5A, 5B, and 5C are reference examples for aiding understanding of the HUD system 4 according to Embodiment 1. FIG. 5A is a diagram explaining the display visual recognition areas when the size of the virtual image 200a in the height direction is larger than that of the virtual image 200 in FIG. 3, and FIGS. 5B and 5C are diagrams showing the states in which drivers with high and low eye positions in the height direction, respectively, view the vehicle foreground in that case.
  • As shown in FIG. 3, the display visual recognition areas 401H, 401M, and 401L differ according to the driver's eye positions 100H, 100M, and 100L, and the superimposed display area 403 also differs according to the display visual recognition areas 401H, 401M, and 401L.
  • In the reference examples of FIGS. 4A, 4B, and 4C, however, this difference in the superimposed display area 403 corresponding to the eye positions 100H, 100M, and 100L is not taken into account. Therefore, for example, as shown in FIG. 4A, the driver at the medium eye position 100M can view the entire superimposed display area 403 within the display visual recognition area 401M, whereas, as shown in FIG. 4B, when the driver's eye position 100H is high, the superimposed display area 403 does not fit within the display visual recognition area 401H, and there is an area 410 in which the display object 201 of the virtual image 200 cannot be superimposed on a real object in the vehicle foreground. Similarly, as shown in FIG. 4C, at the low driver's eye position 100L, the superimposed display area 403 does not fit within the display visual recognition area 401L, and such a non-superimposable area 410 exists.
  • In order to eliminate the non-superimposable area 410, the size of the virtual image 200a in the height direction is required to be larger than that of the virtual image 200 of FIG. 3, as illustrated in FIG. 5A of the reference example.
  • As a result, the display visual recognition area 401H is expanded from a depth of 18 m to 56 m to a depth of 18 m to 100 m, and the display visual recognition area 401L is expanded from a depth of 23 m to 670 m to a depth of 20 m to 670 m. That is, as shown in FIG. 5A, a display visual recognition area 202 for the driver with the high eye position 100H in the height direction is added to the upper end of the virtual image 200, and a display visual recognition area 203 for the driver with the low eye position 100L in the height direction is added to the lower end of the virtual image 200.
  • However, the display visual recognition area 203 for the driver with the low eye position 100L, which is added to the lower end of the virtual image 200, is an area unnecessary for a driver with the high eye position 100H, and the display visual recognition area 202 for the driver with the high eye position 100H, which is added to the upper end of the virtual image 200, is an area unnecessary for a driver with the low eye position 100L.
  • Moreover, merely changing the eye position by ±0.06 m in the height direction requires the size of the virtual image in the height direction to be increased from 0.28 m (virtual image 200) to 0.38 m (virtual image 200a), a factor of 1.3 or more.
  • Since the vehicle-side space, such as the dashboard, in which the HUD device 3 is installed is limited, it is not preferable to increase the size of the HUD device 3.
  • Therefore, in Embodiment 1, the area changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100, so that drivers with different eye positions can comfortably view the virtual image without increasing the size of the HUD device 3.
  • FIG. 6A is a diagram illustrating a modification example of the superimposed display region 403 according to the position of the driver's eyes in the first embodiment, and illustrates a case where the driver has a high eye position in the height direction.
  • the recommended display area 402 has a depth of 0 m to 100 m regardless of the position of the eye in the height direction.
  • the area changing unit 22 calculates the display visual recognition area 401H for the driver at the high eye position 100H as a depth of 18 m to 56 m.
  • the area changing unit 22 calculates the superimposed display area 403H for the driver at the high eye position 100H as the depth of 18 m to 56 m, and calculates the non-superimposed display area 404H as the depth of 0 m to 18 m and 56 m to 100 m.
  • FIG. 6B is a diagram illustrating a state in which the driver with a medium eye position in the height direction visually recognizes the vehicle foreground in the first embodiment.
  • When the image generation unit 23 described later causes the HUD device 3 to display image information guiding the vehicle 1 to turn left at an intersection 75 m ahead as a guidance display for the driver at the medium eye position 100M, the display object 201 of the virtual image 200 is superimposed on the intersection, which is a real object 75 m ahead.
  • FIG. 6C is a diagram illustrating a state in which the driver with a high eye position in the height direction visually recognizes the vehicle foreground in the first embodiment.
  • When the image generation unit 23 described later causes the HUD device 3 to display image information guiding the vehicle 1 to turn left at an intersection 75 m ahead as a guidance display for the driver at the high eye position 100H, the intersection, which is a real object, exists in the non-superimposed display area 404H; therefore, the display object 201 is superimposed on the foreground of the vehicle 1 without being superimposed on the intersection. The display position of the display object 201 in that case is assumed to be predetermined. In the example of FIG. 6C, the display object 201 is displayed in the lower part of the virtual image 200.
  • In contrast, when the intersection, which is a real object, exists in the superimposed display area, the image generation unit 23 superimposes the display object 201 of the virtual image 200 on the intersection, as illustrated in FIG. 6B.
  • FIG. 7A is a diagram illustrating a modification example of the superimposed display region 403 according to the position of the driver's eyes in the first embodiment, and illustrates a case where the driver has a low eye position in the height direction.
  • the recommended display area 402 has a depth of 0 m to 100 m regardless of the position of the eye in the height direction.
  • the area changing unit 22 calculates the display visual recognition area 401L for the driver at the low eye position 100L as the depth of 23 m to 670 m.
  • the region changing unit 22 calculates the superimposed display region 403L for the driver at the low eye position 100L as a depth of 23 m to 100 m, and calculates the non-superimposed display region 404L as a depth of 0 m to 23 m.
  • FIG. 7B is a diagram illustrating a state in which the driver with a medium eye position in the height direction visually recognizes the vehicle foreground in the first embodiment.
  • When the image generation unit 23 described later causes the HUD device 3 to display image information guiding the vehicle 1 to turn left at an intersection 20 m ahead as a guidance display for the driver at the medium eye position 100M, the display object 201 of the virtual image 200 is superimposed on the intersection, which is a real object 20 m ahead.
  • FIG. 7C is a diagram illustrating a state in which the driver with a low eye position in the height direction visually recognizes the vehicle foreground in the first embodiment.
  • When the image generation unit 23 described later causes the HUD device 3 to display image information guiding the vehicle 1 to turn left at an intersection 20 m ahead as a guidance display for the driver at the low eye position 100L, the intersection, which is a real object, exists in the non-superimposed display area 404L; therefore, the display object 201 is superimposed on the foreground of the vehicle 1 without being superimposed on the intersection. The display position of the display object 201 in that case is assumed to be predetermined.
  • the recommended display area 402 corresponding to the display object 201 that guides the left or right turn at the intersection is set to a depth of 0 m to 100 m.
  • However, the recommended display area 402 is not limited to this depth distance, and may vary depending on the display object 201.
  • For example, the recommended display area 402 corresponding to a display object 201 that highlights a white line on the road surface has a depth of 30 m to 80 m. Since the superimposed display area 403 and the non-superimposed display area 404 are determined from the recommended display area 402, when the recommended display area 402 changes according to the display object 201, the superimposed display area 403 and the non-superimposed display area 404 change accordingly.
  • Information in which the correspondence relationship between the display object 201 and at least the recommended display area 402 is defined is stored in the database 24 of the HUD control device 2.
  • The database 24 may store not only information defining the correspondence between the display object 201 and the recommended display area 402 but also information defining the correspondence between the display object 201 and the display visual recognition area 401, the superimposed display area 403, and the non-superimposed display area 404. A method of using the information stored in the database 24 will be described later.
  • the database 24 does not need to be built in the HUD control device 2, and may be constructed on an external server device (not shown) that can communicate via the wireless communication device 56, for example.
  • the database 24 also stores information related to the HUD device 3 such as the position, size, and distortion amount of the virtual image 200.
  • the image generation unit 23 acquires captured image information from an in-vehicle camera 51 and an out-of-vehicle camera 52 of the in-vehicle device 5 described later. Further, for example, the image generation unit 23 acquires the position information of the vehicle 1 from a GPS (Global Positioning System) receiver 53. Further, for example, the image generation unit 23 acquires detection information of an object existing around the vehicle 1 from the radar sensor 54. Further, for example, the image generation unit 23 acquires various types of vehicle information such as the traveling speed of the vehicle 1 from an ECU (Electronic Control Unit) 55. For example, the image generation unit 23 acquires various types of information from the wireless communication device 56. For example, the image generation unit 23 acquires navigation information and information indicating the shape of the road from the navigation device 57.
  • the image generation unit 23 determines a display object 201 to be displayed on the HUD device 3 from among a large number of display objects 201 stored in the database 24 using various information acquired from the in-vehicle device 5.
  • The display object 201 is a figure, character, or the like representing the traveling speed of the vehicle 1, the lane in which the vehicle 1 is traveling, the travel route of the vehicle 1, the positions of other vehicles or obstacles around the vehicle 1, the traveling direction of the vehicle 1, and the like.
  • The image generation unit 23 determines the display mode of the display object 201 and generates image information including the display object 201 in the determined display mode.
  • the image generation unit 23 outputs the generated image information to the display unit 31 of the HUD device 3. This image information is projected onto the windshield 300 by the HUD device 3 and visually recognized by the driver as a display object 201 of the virtual image 200.
  • the display mode of the display object 201 includes the shape, position, size, and color of the display object 201 in the virtual image 200 and whether or not the display object 201 is superimposed or non-superimposed on a real object.
  • The image generation unit 23 detects the position of the real object on which the display object 201 is to be superimposed, using the various types of information acquired from the in-vehicle device 5, and determines whether the real object exists in the superimposed display area 403. When it determines that the real object exists in the superimposed display area 403, the image generation unit 23 determines the display mode so that the display object 201 is viewed superimposed on the real object.
  • For example, the image generation unit 23 may generate binocular parallax image information in which the display object 201 is shifted in the left-right direction, or may deform the image information so that the display object 201 shrinks toward the vanishing point of the foreground of the vehicle 1. Further, so that the display object 201 is viewed superimposed on a real object such as an intersection, the image generation unit 23 may change the size or color of the display object 201 according to the real object, or may add a shadow to the display object 201.
  • When the real object does not exist in the superimposed display area 403, that is, when the real object exists in the non-superimposed display area 404, the image generation unit 23 determines the display mode, such as the shape, position, and size of the display object 201, so that the display object 201 is displayed in the lower part of the virtual image 200.
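  • This decision reduces to a single test: if the real object lies inside the superimposed display area, the display object is drawn so that it registers with the real object; otherwise it is drawn at the predetermined position in the lower part of the virtual image. A hedged sketch of that branch follows; names such as DisplayMode and decide_display_mode are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DisplayMode:
    superimposed: bool   # True: registered with the real object; False: fixed lower position
    anchor: str          # "real_object" or "lower_part_of_virtual_image"
    depth_m: float       # depth at which the display object is rendered

def decide_display_mode(object_depth_m, superimposed_area_m):
    """Choose how to draw a display object for a real object at object_depth_m."""
    near, far = superimposed_area_m
    if near <= object_depth_m <= far:
        # Real object inside area 403: superimpose the display object on the object itself.
        return DisplayMode(True, "real_object", object_depth_m)
    # Real object inside area 404: fall back to the predetermined lower position.
    return DisplayMode(False, "lower_part_of_virtual_image", near)

# High eye position 100H (area 403H = 18-56 m), intersection 75 m ahead -> non-superimposed.
print(decide_display_mode(75.0, (18.0, 56.0)))
# Medium eye position 100M (area 403M = 20-100 m), same intersection -> superimposed.
print(decide_display_mode(75.0, (20.0, 100.0)))
```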
  • the in-vehicle device 5 includes an in-vehicle camera 51.
  • the in-vehicle device 5 includes at least one of the outside camera 52, the GPS receiver 53, the radar sensor 54, the ECU 55, the wireless communication device 56, or the navigation device 57.
  • the in-vehicle camera 51 is a camera that images a passenger of the vehicle 1, and particularly images a driver.
  • the in-vehicle camera 51 outputs captured image information to the eye position detection unit 21.
  • the outside camera 52 is a camera that captures the periphery of the vehicle 1. For example, the outside camera 52 images a lane in which the vehicle 1 is traveling, other vehicles existing around the vehicle 1, obstacles, and the like.
  • the vehicle exterior camera 52 outputs captured image information to the HUD control device 2.
  • the GPS receiver 53 receives a GPS signal from a GPS satellite (not shown), and outputs position information corresponding to coordinates indicated by the GPS signal to the HUD control device 2.
  • the radar sensor 54 detects the direction and shape of an object existing around the vehicle 1 and further detects the distance between the vehicle 1 and the object.
  • the radar sensor 54 is, for example, a millimeter wave band radio wave sensor, an ultrasonic sensor, or an optical radar sensor.
  • the radar sensor 54 outputs detection information to the HUD control device 2.
  • the ECU 55 is a control unit that controls various operations of the vehicle 1.
  • the ECU 55 communicates with the HUD control device 2 by a communication method based on a CAN (Controller Area Network) standard, and outputs vehicle information indicating various operation states of the vehicle 1 to the HUD control device 2.
  • the vehicle information includes the traveling speed and steering angle of the vehicle 1.
  • the wireless communication device 56 is a communication device that is connected to an external network and acquires various types of information through wireless communication.
  • the wireless communication device 56 is, for example, a mobile communication terminal such as a receiver mounted on the vehicle 1 or a smartphone brought into the vehicle 1.
  • the network outside the vehicle is, for example, the Internet.
  • Various types of information include weather information around the vehicle 1 and information on facilities.
  • the wireless communication device 56 may acquire information such as the recommended display area 402 corresponding to the image information from the external server device through the external network.
  • the wireless communication device 56 outputs various information to the HUD control device 2.
  • The navigation device 57 searches for the travel route of the vehicle 1 based on destination information set by a passenger of the vehicle 1, map information stored in a storage device (not shown), and the position information acquired from the GPS receiver 53, and provides guidance along the route.
  • the storage device that stores the map information may be built on the vehicle 1 or may be built on an out-of-vehicle server device that can communicate via the wireless communication device 56.
  • the navigation device 57 provides navigation information such as the traveling direction of the vehicle 1 at a guidance point such as an intersection on the travel route, the expected arrival time to the waypoint or the destination, traffic information on the travel route of the vehicle 1 and surrounding roads, and the like. To the HUD control device 2.
  • the navigation device 57 may be an information device mounted in the vehicle 1 or a portable communication terminal such as a PND (Portable Navigation Device) or a smartphone brought into the vehicle 1.
  • FIG. 8 is a flowchart showing an operation example of the HUD control device 2 according to the first embodiment.
  • the HUD control device 2 repeatedly executes the processing shown in the flowchart of FIG. 8 during a period in which the engine of the vehicle 1 is on or a period in which the HUD system 4 is on.
  • the eye position detection unit 21 detects the position of the driver's eyes using the captured image information captured by the in-vehicle camera 51. Further, the eye position detection unit 21 determines whether the detected driver's eye position is a height position of high (H), medium (M), or low (L).
  • FIG. 9 is a diagram for explaining an example of eye position determination by the eye position detection unit 21 according to the first embodiment.
  • For example, the eye position detection unit 21 determines the driver's eye position 100H, 100M, or 100L in the height direction using a first threshold TH1 (for example, 1.45 m) and a second threshold TH2 (for example, 1.34 m) that are set in advance based on the eyellipse.
  • The "eyellipse" is the name of an ellipse that statistically represents the distribution of drivers' eye positions; of the three ellipses shown, a larger ellipse statistically contains a larger proportion of drivers' eye positions.
  • When the detected eye position is equal to or higher than the first threshold TH1, the eye position detection unit 21 determines that it is the high eye position 100H; when it is less than the first threshold TH1 and equal to or higher than the second threshold TH2, the medium eye position 100M; and when it is less than the second threshold TH2, the low eye position 100L.
  • the height from the ground to the position of the driver's eye 100 is divided into three stages, but may be divided into any number of stages.
  • the third threshold value TH3 and the fourth threshold value TH4 in FIG. 9 will be described later.
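  • The threshold comparison in FIG. 9 amounts to a simple classification of the measured eye height. Below is a minimal sketch assuming the example thresholds TH1 = 1.45 m and TH2 = 1.34 m quoted above; Embodiment 2 applies the same idea in the depth direction with TH3 and TH4, whose example values are not given in the text.

```python
def classify_eye_height(eye_height_m, th1=1.45, th2=1.34):
    """Classify the driver's eye height into the three levels of FIG. 9.

    Returns '100H' (high), '100M' (medium) or '100L' (low).
    """
    if eye_height_m >= th1:
        return "100H"   # at or above the first threshold TH1
    if eye_height_m >= th2:
        return "100M"   # below TH1 but at or above the second threshold TH2
    return "100L"       # below TH2

print(classify_eye_height(1.46))  # -> '100H'
print(classify_eye_height(1.40))  # -> '100M'
print(classify_eye_height(1.30))  # -> '100L'
```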
  • the image generation unit 23 determines the display object 201 of image information to be displayed on the HUD device 3 based on various information acquired from the in-vehicle device 5.
  • the display object 201 is, for example, content such as an arrow that guides a travel route, content that highlights a white line, content that indicates that a surrounding vehicle has been detected, and the like.
  • the display object 201 is not limited to these contents.
  • The database 24 stores information about the HUD device 3, such as the position, size, and distortion amount of the virtual image 200, and information such as the recommended display area 402 corresponding to the display object 201; the area changing unit 22 acquires the information on the position and size of the virtual image 200 from the database 24.
  • In step ST13, the area changing unit 22 calculates the display visual recognition area 401 using the driver's eye position in the height direction determined by the eye position detection unit 21 in step ST11 and the position and size of the virtual image 200 acquired from the database 24.
  • the display visual recognition area 401 calculated in advance for each combination of the position and size of the virtual image 200 and the position of the eye in the height direction may be stored in the database 24.
  • In that case, the area changing unit 22 acquires the display visual recognition area 401 from the database 24 without calculating it.
  • the virtual image 200 projected on the windshield 300 may be distorted due to the distortion of the reflection mirror 32 and the windshield 300.
  • the region changing unit 22 uses the information indicating the correspondence between the eye position in the height direction and the amount of distortion of the virtual image 200 stored in the database 24 to calculate the distortion of the display object 201 in the image information. By correcting, the distortion of the display object 201 in the virtual image 200 projected onto the windshield 300 may be suppressed.
  • In step ST14, the area changing unit 22 specifies the recommended display area 402 corresponding to the display object 201 determined by the image generation unit 23, using the information indicating the correspondence between the display object 201 and the recommended display area 402 stored in the database 24.
  • In step ST15, the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 corresponding to the driver's eye position determined by the eye position detection unit 21 in step ST11, using the display visual recognition area 401 calculated in step ST13 and the recommended display area 402 specified in step ST14.
  • Note that when the correspondence between the display object 201, the eye position, and the superimposed display area 403 and the non-superimposed display area 404 is stored in the database 24 in advance, the area changing unit 22 acquires the superimposed display area 403 and the non-superimposed display area 404 from the database 24 without calculating them.
  • In step ST16, the image generation unit 23 determines the display mode of the display object 201 determined in step ST12, using the superimposed display area 403 or the non-superimposed display area 404 calculated by the area changing unit 22 in step ST15. The image generation unit 23 then generates image information including the display object 201 in the determined display mode, and outputs the image information to the display unit 31 of the HUD device 3.
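  • Putting steps ST11 to ST16 together, one pass of the HUD control device's processing could be sketched as below. This is only an assumed composition of the hypothetical helpers defined in the earlier sketches (classify_eye_height, display_viewing_area, split_recommended_area, decide_display_mode); the recommended-area lookup stands in for the database 24 and uses the two example areas quoted in the text.

```python
# Hypothetical lookup standing in for database 24: display object -> recommended area 402 (m).
RECOMMENDED_AREA = {
    "turn_guidance_arrow": (0.0, 100.0),
    "white_line_highlight": (30.0, 80.0),
}

def hud_control_cycle(eye_height_m, display_object, object_depth_m):
    """One pass of the flow of FIG. 8 (ST11-ST16), built from the sketches above."""
    # ST11: detect and classify the eye position (camera processing omitted here).
    eye_class = classify_eye_height(eye_height_m)

    # ST12: the display object is assumed to have been chosen by the image generation unit.

    # ST13: display visual recognition area 401 from the eye position and the image geometry.
    viewing = display_viewing_area(eye_height_m)

    # ST14: recommended display area 402 for this display object.
    recommended = RECOMMENDED_AREA[display_object]

    # ST15: superimposed display area 403 and non-superimposed display area 404.
    superimposed, non_superimposed = split_recommended_area(viewing, recommended)

    # ST16: display mode of the display object; actual image generation is not modelled.
    mode = decide_display_mode(object_depth_m, superimposed)
    return eye_class, superimposed, non_superimposed, mode

print(hud_control_cycle(1.46, "turn_guidance_arrow", 75.0))
```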
  • the HUD control device 2 includes the eye position detection unit 21, the image generation unit 23, and the region change unit 22.
  • the eye position detection unit 21 detects the position of the driver's eye 100 in the height direction.
  • the image generation unit 23 generates image information to be displayed on the display unit 31 of the HUD device 3.
  • The area changing unit 22 changes, according to the height-direction position of the driver's eye 100 detected by the eye position detection unit 21, the superimposed display area 403 in which the image information generated by the image generation unit 23 is superimposed and displayed as the virtual image 200 on a real object in the foreground of the vehicle 1.
  • With this configuration, the driver can be made to view the virtual image 200 superimposed on a real object regardless of the height-direction position of the driver's eye 100, without increasing the size of the HUD device 3.
  • the region changing unit 22 changes the superimposed display region 403 according to the height direction position of the driver's eye 100 and the image information generated by the image generating unit 23.
  • the area changing unit 22 can change the superimposed display area 403 more accurately by changing the superimposed display area 403 for each display object 201 included in the image information.
  • Since the configuration of the HUD system 4 according to Embodiment 2 is the same as that shown in FIG. 1 of Embodiment 1, FIG. 1 is used below.
  • In Embodiment 2, the eye position detection unit 21 detects the position of the driver's eye 100 in the depth direction in addition to the position of the driver's eye 100 in the height direction. For example, the eye position detection unit 21 determines the driver's eye position 100F, 100C, or 100B in the depth direction using a third threshold TH3 and a fourth threshold TH4 that are set in advance based on the eyellipse shown in FIG. 9.
  • Here, the position of the driver's eyes in the depth direction is divided into three levels, a front eye position 100F, a center eye position 100C, and a rear eye position 100B, by the third threshold TH3 and the fourth threshold TH4, but it may be divided into any number of levels.
  • FIG. 10 is a diagram illustrating an example of changing the superimposed display area 403 according to the position of the driver's eyes in Embodiment 2, for the case where the driver's eye position in the depth direction is to the rear.
  • the recommended display area 402 has a depth of 0 m to 100 m regardless of the position of the eye in the depth direction.
  • both the front eye position 100F and the rear eye position 100B are located at a height of 1.4 m from the ground, the depth distance from the front eye position 100F to the virtual image 200 is 5 m, and the rear eye The depth distance from the position 100B to the virtual image 200 is 5.5 m.
  • the display visual recognition area 401F for the driver at the front eye position 100F and the display visual recognition area 401B for the driver at the rear eye position 100B are different.
  • the display visual recognition area 401F has a depth of 20 m to 100 m
  • the display visual recognition area 401B has a depth of 22 m to 110 m.
  • the display visual recognition areas 401F and 401B are calculated based on trigonometric functions.
  • the superimposed display area 403 and the non-superimposed display area 404 are also different depending on the position of the driver's eyes in the depth direction.
  • the superimposed display area 403F for the driver at the front eye position 100F has a depth of 20m to 100m
  • the non-superimposed display area 404F has a depth of 0m to 20m
  • the superimposed display area 403B for the driver at the rear eye position 100B has a depth of 22m to 100m
  • the non-superimposed display area 404B has a depth of 0m to 22m.
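  • The 20 m to 100 m and 22 m to 110 m ranges quoted for the front and rear eye positions follow from the same similar-triangle relation as before, with only the eye-to-image distance changed from 5 m to 5.5 m. Reusing the earlier display_viewing_area sketch (a hypothetical helper with the same assumed image edge heights):

```python
# Front eye position 100F: eye 1.4 m high, virtual image 5.0 m ahead of the eye.
print(display_viewing_area(1.40, image_distance_m=5.0))   # ~ (20 m, 100 m)
# Rear eye position 100B: eye 1.4 m high, virtual image 5.5 m ahead of the eye.
print(display_viewing_area(1.40, image_distance_m=5.5))   # ~ (22 m, 110 m)
```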
  • the eye position detection unit 21 detects the position of the driver's eye 100 in the depth direction.
  • the area changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100 in the depth direction.
  • In this way, the area changing unit 22 changes the superimposed display area 403 according to the position of the driver's eyes 100 in the depth direction in addition to the height direction. Thereby, the area changing unit 22 can change the superimposed display area 403 more accurately.
  • the region changing unit 22 changes the superimposed display region 403 according to the position in the depth direction of the driver's eye 100 and the image information generated by the image generating unit 23.
  • the area changing unit 22 can change the superimposed display area 403 more accurately by changing the superimposed display area 403 for each display object 201 included in the image information.
  • FIG. 11 is a block diagram showing a main part of the HUD system 4 according to the third embodiment.
  • FIG. 12 is a configuration diagram of the HUD system 4 according to the third embodiment when mounted on a vehicle.
  • the HUD system 4 according to the third embodiment has a configuration in which an angle information acquisition unit 25 and a reflection mirror adjustment unit 33 are added to the HUD system 4 according to the first embodiment shown in FIG.
  • In FIG. 11 and FIG. 12, the same or corresponding parts as those in FIG. 1 and FIG. 2 are denoted by the same reference numerals, and description thereof is omitted.
  • the HUD device 3 includes a reflection mirror adjustment unit 33 that adjusts the tilt angle of the reflection mirror 32.
  • the reflection mirror adjustment unit 33 is an actuator or the like.
  • the reflection mirror adjustment unit 33 adjusts the tilt angle of the reflection mirror 32 in accordance with a driver's instruction or the like.
  • the reflection mirror adjustment unit 33 outputs angle information including the adjusted tilt angle of the reflection mirror 32.
  • the HUD control device 2 includes an angle information acquisition unit 25.
  • the angle information acquisition unit 25 acquires angle information including the tilt angle of the reflection mirror 32 from the reflection mirror adjustment unit 33 and outputs the angle information to the region change unit 22.
  • FIG. 13 is a diagram illustrating a correspondence relationship between the tilt angle of the reflection mirror 32 and the position of the virtual image 200 in the third embodiment.
  • the reflection mirror adjustment unit 33 changes the reflection angle of the light beam emitted from the display unit 31 on the reflection mirror 32, thereby changing the position of the virtual image 200 in the height direction and the depth direction.
  • As a result, the display visual recognition area 401 is changed, and the reflection mirror 32 can be reduced in size.
  • Therefore, the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 in consideration of the height-direction and depth-direction position of the virtual image 200 corresponding to the tilt angle of the reflection mirror 32.
  • the database 24 stores information related to the HUD device 3 such as the position, size, and distortion amount of the virtual image 200, information on the recommended display area 402 corresponding to the display object 201, and the like.
  • Furthermore, in Embodiment 3, the database 24 stores, as information on the position of the virtual image 200, information indicating the correspondence between the tilt angle of the reflection mirror 32 and the position, size, distortion amount, and the like of the virtual image 200.
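  • One simple way to realize such a correspondence is a lookup table keyed by the mirror tilt angle, interpolating between calibrated entries. The sketch below is an assumed illustration only; the tilt angles and virtual-image values in the table are placeholders, not figures from the text.

```python
import bisect

# Hypothetical calibration table: mirror tilt angle (deg) -> (image distance m, image centre height m).
TILT_TABLE = [
    (28.0, (4.8, 1.15)),
    (30.0, (5.0, 1.19)),
    (32.0, (5.2, 1.23)),
]

def virtual_image_for_tilt(tilt_deg):
    """Linearly interpolate the virtual image position for a given mirror tilt angle."""
    angles = [a for a, _ in TILT_TABLE]
    i = bisect.bisect_left(angles, tilt_deg)
    if i == 0:
        return TILT_TABLE[0][1]      # clamp below the table
    if i == len(TILT_TABLE):
        return TILT_TABLE[-1][1]     # clamp above the table
    (a0, (d0, h0)), (a1, (d1, h1)) = TILT_TABLE[i - 1], TILT_TABLE[i]
    t = (tilt_deg - a0) / (a1 - a0)
    return (d0 + t * (d1 - d0), h0 + t * (h1 - h0))

print(virtual_image_for_tilt(31.0))  # ~ (5.1, 1.21)
```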
  • FIG. 14 is a flowchart showing an operation example of the HUD control device 2 according to Embodiment 3. Steps ST11, ST12, ST14, and ST16 in FIG. 14 are the same operations as steps ST11, ST12, ST14, and ST16 in FIG. 8, and description thereof is omitted.
  • In step ST31, the angle information acquisition unit 25 acquires angle information including the tilt angle of the reflection mirror 32 from the reflection mirror adjustment unit 33.
  • the region changing unit 22 acquires information on the position and size of the virtual image 200 corresponding to the tilt angle of the reflection mirror 32 acquired by the angle information acquisition unit 25 from the database 24, and specifies the position and size of the virtual image 200.
  • In step ST13a, the area changing unit 22 calculates the display visual recognition area 401 using the driver's eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11, and the position and size of the virtual image 200 specified in step ST31.
  • a display visual recognition area 401 calculated in advance for each combination of the position and size of the virtual image 200 and the eye position in at least one of the height direction and the depth direction may be stored in the database 24.
  • In that case, the area changing unit 22 acquires the display visual recognition area 401 from the database 24 without calculating it.
  • In step ST15a, the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 corresponding to the driver's eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11, using the display visual recognition area 401 calculated in step ST13a and the recommended display area 402 specified in step ST14.
  • Information defining the correspondence between the display object 201, the eye position in at least one of the height direction and the depth direction, and the superimposed display area 403 and the non-superimposed display area 404 may be stored in the database 24.
  • In that case, the area changing unit 22 acquires the superimposed display area 403 and the non-superimposed display area 404 from the database 24 without calculating them.
  • the HUD device 3 includes the reflection mirror adjustment unit 33 that adjusts the tilt angle of the reflection mirror 32.
  • the HUD control device 2 includes an angle information acquisition unit 25 that acquires angle information including the tilt angle of the reflection mirror 32 from the reflection mirror adjustment unit 33.
  • the region changing unit 22 changes the superimposed display region 403 according to the position of the driver's eye 100 and the tilt angle of the reflecting mirror 32 acquired by the angle information acquiring unit 25.
  • the area changing unit 22 can change the superimposed display area 403 in response to the change in the display visual recognition area 401 accompanying the change in the tilt angle of the reflection mirror 32. Therefore, the area changing unit 22 can change the superimposed display area 403 more accurately.
  • FIG. 15 is a block diagram illustrating a main part of the HUD system 4 according to the fourth embodiment.
  • FIG. 16 is a configuration diagram when the HUD system 4 according to the fourth embodiment is mounted on a vehicle.
  • the HUD system 4 according to the fourth embodiment has a configuration in which a position information acquisition unit 26 and a HUD device position adjustment unit 34 are added to the HUD system 4 according to the first embodiment shown in FIG.
  • In FIG. 15 and FIG. 16, the same or corresponding parts as those in FIG. 1 and FIG. 2 are denoted by the same reference numerals, and description thereof is omitted.
  • the HUD device 3 includes a HUD device position adjustment unit 34 that adjusts the tilt angle or the depth position of the HUD device 3 or a part of the HUD device 3.
  • the HUD device position adjustment unit 34 is an actuator or the like.
  • In accordance with a driver's instruction or the like, the HUD device position adjustment unit 34 adjusts at least one of the tilt angle or depth position of the display unit 31, the tilt angle or depth position of the reflection mirror 32, or the tilt angle or depth position of the housing of the HUD device 3 that incorporates the display unit 31 and the reflection mirror 32.
  • The HUD device position adjustment unit 34 outputs position information including the adjusted tilt angle or depth position of the HUD device 3 or of the part of the HUD device 3.
  • the HUD control device 2 includes a position information acquisition unit 26.
  • the position information acquisition unit 26 acquires position information including the tilt angle or depth position of the HUD device 3 or a part of the HUD device 3 from the HUD device position adjustment unit 34 and outputs the position information to the region change unit 22.
  • FIG. 17 is a diagram illustrating a correspondence relationship between the depth position of the HUD device 3 and the position of the virtual image 200 in the fourth embodiment.
  • the position of the virtual image 200 in the height direction and the depth direction is changed by the HUD device position adjusting unit 34 changing the depth position of the housing 35 of the HUD device 3.
  • the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 in consideration of the depth position of the housing 35 of the HUD device 3.
  • Likewise, when the tilt angle or depth position of the display unit 31 or the reflection mirror 32 is adjusted, the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 in consideration of those depth positions and tilt angles.
  • As in Embodiment 1, the database 24 stores information about the HUD device 3 such as the position, size, and distortion amount of the virtual image 200, information on the recommended display area 402 corresponding to the display object 201, and the like. Furthermore, in Embodiment 4, the database 24 stores, as information on the position of the virtual image 200, information indicating the correspondence between at least one of the tilt angle or depth position of the display unit 31, the tilt angle or depth position of the reflection mirror 32, or the tilt angle or depth position of the entire HUD device 3 and the position, size, distortion amount, and the like of the virtual image 200.
  • FIG. 18 is a flowchart showing an operation example of the HUD control device 2 according to Embodiment 4. Steps ST11, ST12, ST14, and ST16 in FIG. 18 are the same operations as steps ST11, ST12, ST14, and ST16 in FIG. 8, and description thereof is omitted.
  • In step ST41, the position information acquisition unit 26 acquires, from the HUD device position adjustment unit 34, position information including at least one of the tilt angle or depth position of the display unit 31, the tilt angle or depth position of the reflection mirror 32, or the tilt angle or depth position of the entire HUD device 3.
  • The area changing unit 22 acquires from the database 24 the information on the position and size of the virtual image 200 corresponding to at least one of the tilt angle or depth position of the display unit 31, the tilt angle or depth position of the reflection mirror 32, or the tilt angle or depth position of the entire HUD device 3 acquired by the position information acquisition unit 26, and specifies the position and size of the virtual image 200.
  • In step ST13b, the area changing unit 22 calculates the display visual recognition area 401 using the position and size of the virtual image 200 specified in step ST41 and the driver's eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11.
  • a display visual recognition area 401 calculated in advance for each combination of the position and size of the virtual image 200 and the eye position in at least one of the height direction and the depth direction may be stored in the database 24.
  • In that case, the area changing unit 22 acquires the display visual recognition area 401 from the database 24 without calculating it.
  • In step ST15b, the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 corresponding to the driver's eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11, using the display visual recognition area 401 calculated in step ST13b and the recommended display area 402 specified in step ST14.
  • Information defining the correspondence between the display object 201, the eye position in at least one of the height direction and the depth direction, and the superimposed display area 403 and the non-superimposed display area 404 may be stored in the database 24.
  • In that case, the area changing unit 22 acquires the superimposed display area 403 and the non-superimposed display area 404 from the database 24 without calculating them.
  • the HUD device 3 includes the HUD device position adjustment unit 34 that adjusts at least one of the depth position or the tilt angle of the entire HUD device 3 or a part of the HUD device 3.
  • the HUD control device 2 includes a position information acquisition unit 26 that acquires position information including at least one of the depth position and the tilt angle of the entire HUD device 3 or a part of the HUD device 3 from the HUD device position adjustment unit 34.
  • The area changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100 and at least one of the depth position or tilt angle of the entire HUD device 3 or of a part of the HUD device 3 acquired by the position information acquisition unit 26.
  • the area changing unit 22 can change the superimposed display area 403 in response to the change of the display visual recognition area 401 accompanying the position change or the angle change of the HUD device 3 or a part of the HUD device 3. Therefore, the area changing unit 22 can change the superimposed display area 403 more accurately.
  • In the above description, the HUD control device 2 controls the HUD device 3, but it may instead be configured to control an HMD (Head-Mounted Display) device. That is, the control target of the HUD control device 2 may be any display device capable of displaying a stereoscopic image, such as a HUD or an HMD.
  • FIGS. 19A and 19B are diagrams illustrating a hardware configuration example of the HUD control device 2 according to each embodiment.
  • The database 24 in the HUD control device 2 is the memory 1001. The functions of the eye position detection unit 21, the region change unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 in the HUD control device 2 are realized by a processing circuit. That is, the HUD control device 2 includes a processing circuit for realizing the above functions.
  • The processing circuit may be the processing circuit 1000 as dedicated hardware, or may be the processor 1002 that executes a program stored in the memory 1001. When the processing circuit is dedicated hardware, the processing circuit 1000 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or a combination thereof.
  • The functions of the eye position detection unit 21, the region change unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 may be realized by a plurality of processing circuits 1000, or may be realized collectively by a single processing circuit 1000.
  • When the processing circuit is the processor 1002, the functions of the eye position detection unit 21, the region change unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 are realized by software, firmware, or a combination of software and firmware. The software or firmware is written as a program and stored in the memory 1001.
  • The processor 1002 reads out and executes the program stored in the memory 1001, thereby realizing the function of each unit. That is, the HUD control device 2 includes the memory 1001 for storing a program which, when executed by the processor 1002, results in the execution of the steps shown in the flowchart of FIG. It can also be said that this program causes a computer to execute the procedures or methods of the eye position detection unit 21, the region change unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26.
  • The processor 1002 is a CPU (Central Processing Unit), a processing device, an arithmetic device, a microprocessor, or the like. The memory 1001 may be a non-volatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), or a flash memory, a magnetic disk such as a hard disk or a flexible disk, or an optical disc such as a CD (Compact Disc) or a DVD (Digital Versatile Disc).
  • A part of the functions of the eye position detection unit 21, the region change unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 may be realized by dedicated hardware, and another part by software or firmware. In this way, the processing circuit in the HUD control device 2 can realize the above-described functions by hardware, software, firmware, or a combination thereof.
  • In the above description, the HUD control device 2 has only the function of controlling the HUD device 3, but it may also have a function of controlling one or more other display devices (for example, a center display) in addition to the HUD device 3. In other words, the HUD control device 2 may be incorporated in a display control device that controls various display devices mounted on the vehicle, such as the HUD device 3 and the center display.
  • In the above description, the HUD control device 2 displays the display object 201 on the HUD device 3 as the virtual image 200, but it may additionally be configured to output information related to the display object 201 from a speaker. For example, when the HUD control device 2 presents to the driver information guiding the vehicle 1 to turn left at an intersection 75 m ahead, it causes the HUD device 3 to display, as the virtual image 200, a display object 201 guiding the left turn at that intersection, and, for the non-superimposed display area 404, it outputs from the speaker a sound guiding the left turn at the intersection.
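  One way to read this example is that the presentation modality is chosen according to whether the guidance point falls inside the superimposed display area 403; the following sketch illustrates that reading with hypothetical stand-ins for the HUD device 3 and the speaker:

      # Illustrative sketch only: choosing between the virtual image 200 and
      # speaker output for a guidance point, as in the left-turn example above.
      def present_guidance(distance_to_target, superimposed_area,
                           show_display_object, play_voice_guidance):
          near, far = superimposed_area   # superimposed display area 403
          if near <= distance_to_target <= far:
              # The target lies where a display object 201 can be overlaid on the
              # real road, so present it as the virtual image 200 on the HUD device 3.
              show_display_object("turn_left_arrow", distance_to_target)
          else:
              # Otherwise fall back to audio guidance from the speaker, as in the
              # non-superimposed display area 404 of the example above.
              play_voice_guidance("Turn left at the intersection 75 meters ahead.")

      # Example use with stand-in callbacks:
      # present_guidance(75.0, (20.0, 60.0),
      #                  show_display_object=lambda obj, d: print("HUD:", obj, d),
      #                  play_voice_guidance=lambda msg: print("Speaker:", msg))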
  • The HUD control device according to the present invention adapts to differences in the position of the driver's eyes without increasing the size of the HUD device, and is therefore suitable for use as a HUD control device that controls an AR-HUD device for a vehicle or the like.
  • 2 HUD control device, 3 HUD device, 4 HUD system, 5 in-vehicle device, 21 eye position detection unit, 22 region change unit, 23 image generation unit, 24 database, 25 angle information acquisition unit, 26 position information acquisition unit, 31 display unit, 32 reflection mirror, 33 reflection mirror adjustment unit, 34 HUD device position adjustment unit (position adjustment unit), 35 housing, 51 in-vehicle camera, 52 outside camera, 53 GPS receiver, 54 radar sensor, 55 ECU, 56 wireless communication device, 57 navigation device, 100 driver's eye, 100B, 100C, 100F, 100H, 100L, 100M driver's eye position, 200, 200a virtual image, 201 display object, 202, 203 display viewing area, 300 windshield (projected surface), 401, 401B, 401F, 401H, 401L, 401M display viewing area, 402 recommended display area, 403, 403B, 403F, 403H, 403L, 403M superimposed display area, 404, 404B, 404F,

Abstract

A head-up display device (3) includes: a display unit (31) that displays image information; and a reflection mirror (32) that reflects the image information displayed by the display unit (31) so as to project it onto a windshield (300), whereby the image information is superimposed and displayed as a virtual image (200) on the foreground of a vehicle (1) visually recognized by a driver. A head-up display control device (2) includes: an eye position detection unit (21) that detects the positions of the driver's eyes (100); an image generation unit (23) that generates the image information to be displayed on the display unit (31); and a region changing unit (22) that, according to the positions of the driver's eyes (100) detected by the eye position detection unit (21), changes a superimposed display region (403) in which the image information generated by the image generation unit (23) is superimposed and displayed as the virtual image (200) on a real object located in the foreground of the vehicle (1).
PCT/JP2018/019695 2018-05-22 2018-05-22 Head-up display control device, head-up display system, and head-up display control method WO2019224922A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/019695 WO2019224922A1 (fr) 2018-05-22 2018-05-22 Head-up display control device, head-up display system, and head-up display control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/019695 WO2019224922A1 (fr) 2018-05-22 2018-05-22 Head-up display control device, head-up display system, and head-up display control method

Publications (1)

Publication Number Publication Date
WO2019224922A1 true WO2019224922A1 (fr) 2019-11-28

Family

ID=68616850

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/019695 WO2019224922A1 (fr) Head-up display control device, head-up display system, and head-up display control method 2018-05-22 2018-05-22

Country Status (1)

Country Link
WO (1) WO2019224922A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014210537A (ja) * 2013-04-19 2014-11-13 Toyota Motor Corporation Head-up display device
JP2015060180A (ja) * 2013-09-20 2015-03-30 Nippon Seiki Co., Ltd. Head-up display device
JP2016101805A (ja) * 2014-11-27 2016-06-02 Pioneer Corporation Display device, control method, program, and storage medium
WO2017090464A1 (fr) * 2015-11-25 2017-06-01 Nippon Seiki Co., Ltd. Head-up display
WO2017138242A1 (fr) * 2016-02-12 2017-08-17 Hitachi Maxell, Ltd. Image display device for vehicle
WO2018030320A1 (fr) * 2016-08-10 2018-02-15 Nippon Seiki Co., Ltd. Vehicle display device

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113109941A (zh) * 2020-01-10 2021-07-13 Future (Beijing) Black Technology Co., Ltd. Head-up display system with layered imaging
CN113109941B (zh) * 2020-01-10 2023-02-10 Future (Beijing) Black Technology Co., Ltd. Head-up display system with layered imaging
JP7456290B2 (ja) 2020-05-28 2024-03-27 Nippon Seiki Co., Ltd. Head-up display device
CN112114427A (zh) * 2020-09-08 2020-12-22 China FAW Co., Ltd. HUD projection height adjustment method, device, apparatus, and vehicle
CN114816292A (zh) * 2021-01-27 2022-07-29 Honda Motor Co., Ltd. Head-up display control system and display method for head-up display
CN114816291A (zh) * 2021-01-27 2022-07-29 Honda Motor Co., Ltd. Head-up display control system and display method for head-up display
CN112947761A (zh) * 2021-03-26 2021-06-11 Wuhu Automotive Prospective Technology Research Institute Co., Ltd. Virtual image position adjustment method and device for AR-HUD system, and storage medium
CN112947761B (zh) * 2021-03-26 2023-07-28 Wuhu Automotive Prospective Technology Research Institute Co., Ltd. Virtual image position adjustment method and device for AR-HUD system, and storage medium
CN114779470A (zh) * 2022-03-16 2022-07-22 Qingdao Virtual Reality Research Institute Co., Ltd. Display method for augmented reality head-up display system
CN114821723A (zh) * 2022-04-27 2022-07-29 Jiangsu Zejing Automotive Electronics Co., Ltd. Projection image plane adjustment method, device, apparatus, and storage medium

Similar Documents

Publication Publication Date Title
WO2019224922A1 (fr) Head-up display control device, head-up display system, and head-up display control method
RU2746380C2 (ru) Head-up display with variable focal plane
US10852818B2 (en) Information provision device and information provision method
JP6830936B2 (ja) 3D-LIDAR system for autonomous vehicles using a dichroic mirror
CN111433067B (zh) Head-up display device and display control method therefor
JP6201690B2 (ja) Vehicle information projection system
US9849832B2 (en) Information presentation system
JP6342704B2 (ja) Display device
US11525694B2 (en) Superimposed-image display device and computer program
JP6981377B2 (ja) Vehicle display control device, vehicle display control method, and control program
US20190241070A1 (en) Display control device and display control method
JP2010143520A (ja) In-vehicle display system and display method
US11325470B2 (en) Method, device and computer-readable storage medium with instructions for controlling a display of an augmented-reality head-up display device for a transportation vehicle
US11803053B2 (en) Display control device and non-transitory tangible computer-readable medium therefor
JP6225379B2 (ja) Vehicle information projection system
US20210152812A1 (en) Display control device, display system, and display control method
JP2008236711A (ja) Driving support method and driving support device
JP6945933B2 (ja) Display system
JP6186905B2 (ja) In-vehicle display device and program
JP2018020779A (ja) Vehicle information projection system
JP6873350B2 (ja) Display control device and display control method
KR101637298B1 (ko) Head-up display device for vehicle using augmented reality
JP2020158014A (ja) Head-up display device, display control device, and display control program
WO2021171397A1 (fr) Display control device, display device, and display control method
JP2018167669A (ja) Head-up display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18919434

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18919434

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP