WO2019224922A1 - Head-up display control device, head-up display system, and head-up display control method


Info

Publication number
WO2019224922A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
unit
driver
head
image information
Prior art date
Application number
PCT/JP2018/019695
Other languages
French (fr)
Japanese (ja)
Inventor
脩平 太田 (Shuhei Ota)
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority to PCT/JP2018/019695
Publication of WO2019224922A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Arrangement of adaptations of instruments

Definitions

  • The present invention relates to a head-up display control device that controls a head-up display device for a vehicle, a head-up display system that includes the head-up display device and the head-up display control device, and a head-up display control method for controlling the head-up display device.
  • A vehicle HUD (Head-Up Display) device is a display device that allows a driver to visually recognize image information without greatly moving his or her line of sight away from the forward field of view.
  • An AR-HUD device using AR (Augmented Reality) superimposes image information such as a route guidance arrow on a real object such as a road, and can thereby present information to the driver more intuitively and understandably than an existing HUD device (see, for example, Patent Document 1).
  • the AR-HUD device described in Patent Document 1 has a configuration in which an image displayed on an image display device such as a projector or a liquid crystal display is reflected by a mirror and projected onto a windshield of a vehicle.
  • By viewing the image projected onto the windshield, the driver visually recognizes it as a virtual image located ahead of the transparent windshield.
  • In such a device, the display visual recognition area, in which the driver can visually recognize the virtual image, needs to exist within the recommended display area, in which the driver can comfortably visually recognize the virtual image. Since the driver's eye height and the look-down angle toward the virtual image differ for each position of the driver's eyes, the display visual recognition area also differs for each position of the driver's eyes. Therefore, in order for the driver to visually recognize the virtual image regardless of the position of the driver's eyes, the AR-HUD device needs to expand the display visual recognition area by enlarging the virtual image. However, enlarging the virtual image requires enlarging the video display device, the mirror, and the like, and as a result the AR-HUD device becomes larger. Since the space on the vehicle side where the AR-HUD device is installed is limited, increasing the size of the AR-HUD device is not preferable.
  • The present invention has been made to solve the above-described problems, and an object of the present invention is to allow the driver to visually recognize a virtual image superimposed on a real object, regardless of the position of the driver's eyes, without increasing the size of the head-up display device.
  • A head-up display control device according to the present invention controls a head-up display device that includes a display unit that displays image information and a reflection mirror that reflects the image information displayed by the display unit and projects it onto a projection surface, and that superimposes and displays the image information as a virtual image on the foreground of a vehicle visually recognized by a driver. The head-up display control device includes: an eye position detection unit that detects the position of the driver's eyes; an image generation unit that generates the image information to be displayed on the display unit; and an area changing unit that changes, according to the position of the driver's eyes detected by the eye position detection unit, a superimposed display area in which the image information generated by the image generation unit is superimposed and displayed as a virtual image on a real object in the foreground of the vehicle.
  • According to the present invention, the superimposed display area in which the image information is superimposed and displayed as a virtual image on a real object in the foreground of the vehicle is changed according to the position of the driver's eyes. Therefore, without increasing the size of the head-up display device, the driver can visually recognize the virtual image superimposed on the real object regardless of the position of the driver's eyes.
  • FIG. 1 is a block diagram showing the main part of the HUD system according to Embodiment 1.
  • FIG. 2 is a configuration diagram of the HUD system according to Embodiment 1 when mounted on a vehicle.
  • FIG. 3 is a diagram explaining the difference in the display visual recognition area according to the position of the driver's eyes in the height direction.
  • FIGS. 4A to 4C are reference examples for aiding understanding of the HUD system according to Embodiment 1, showing the states in which drivers with medium, high, and low eye positions in the height direction, respectively, visually recognize the vehicle foreground.
  • FIGS. 5A to 5C are reference examples for aiding understanding of the HUD system according to Embodiment 1 in which the size of the virtual image in the height direction is larger than in FIG. 3: FIG. 5A explains the resulting difference in the display visual recognition area according to the position of the driver's eyes in the height direction, and FIGS. 5B and 5C show the states in which a driver with a high eye position and a driver with a low eye position, respectively, visually recognize the vehicle foreground.
  • FIG. 6A is a diagram illustrating an example of changing the superimposed display area according to the position of the driver's eyes in Embodiment 1, for a driver with a high eye position in the height direction.
  • FIG. 6B is a diagram showing the state in which a driver with a medium eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1.
  • FIG. 6C is a diagram showing the state in which a driver with a high eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1.
  • FIG. 7A is a diagram illustrating an example of changing the superimposed display area according to the position of the driver's eyes in Embodiment 1, for a driver with a low eye position in the height direction.
  • FIG. 7B is a diagram showing the state in which a driver with a medium eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1.
  • FIG. 7C is a diagram showing the state in which a driver with a low eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1.
  • FIG. 8 is a flowchart illustrating an operation example of the HUD control device according to Embodiment 1.
  • FIG. 9 is a diagram illustrating an example of eye position determination performed by the eye position detection unit according to Embodiment 1.
  • FIG. 10 is a diagram illustrating an example of changing the superimposed display area according to the position of the driver's eyes in Embodiment 2.
  • FIG. 11 is a block diagram showing the main part of the HUD system according to Embodiment 3.
  • FIG. 12 is a configuration diagram of the HUD system according to Embodiment 3 when mounted on a vehicle.
  • FIG. 13 is a diagram showing the correspondence between the tilt angle of the reflection mirror and the position of the virtual image.
  • FIG. 14 is a flowchart illustrating an operation example of the HUD control device according to Embodiment 3.
  • FIG. 15 is a block diagram showing the main part of the HUD system according to Embodiment 4.
  • FIG. 16 is a configuration diagram of the HUD system according to Embodiment 4 when mounted on a vehicle.
  • FIG. 17 is a diagram showing the correspondence between the depth position of the HUD device and the position of the virtual image.
  • FIG. 18 is a flowchart illustrating an operation example of the HUD control device according to Embodiment 4.
  • FIGS. 19A and 19B are diagrams showing hardware configuration examples of the HUD control device according to each embodiment.
  • FIG. 1 is a block diagram showing a main part of the HUD system 4 according to the first embodiment.
  • FIG. 2 is a configuration diagram of the HUD system 4 according to Embodiment 1 when mounted on a vehicle.
  • a vehicle 1 is equipped with a HUD system 4 including a HUD control device 2 and a HUD device 3, and an in-vehicle device 5.
  • the HUD device 3 includes a display unit 31 and a reflection mirror 32.
  • the display unit 31 displays image information generated by the HUD control device 2.
  • For the display unit 31, a display device such as a liquid crystal display, a projector, or a laser light source is used.
  • the reflection mirror 32 reflects the display light of the image information displayed by the display unit 31 and projects it onto the windshield 300.
  • The driver visually recognizes, from the position of the eye 100, the display object 201 of the virtual image 200 formed through the windshield 300.
  • the windshield 300 is a projection surface of the virtual image 200.
  • The projection surface is not limited to the windshield 300 and may be a half mirror called a combiner.
  • the HUD control device 2 includes an eye position detection unit 21, a region change unit 22, an image generation unit 23, and a database 24.
  • the eye position detection unit 21 acquires captured image information of a driver captured by an in-vehicle camera 51 described later, analyzes the acquired captured image information, and detects the position of the driver's eye 100 in the height direction.
  • the eye position detection unit 21 may detect the positions of the left eye and the right eye of the driver as the position of the driver's eye 100, or may detect the center positions of the left eye and the right eye. Further, the eye position detection unit 21 may estimate the center positions of the left eye and the right eye from the driver's face position in the captured image information.
  • The area changing unit 22 changes the superimposed display area according to the position of the driver's eye 100 detected by the eye position detection unit 21. Specifically, the area changing unit 22 calculates a display visual recognition area based on the position of the driver's eye 100 and the position of the virtual image 200. Further, the area changing unit 22 specifies a recommended display area corresponding to the display object 201 of the virtual image 200, and changes the superimposed display area based on the display visual recognition area and the recommended display area. The area changing unit 22 also changes the non-superimposed display area based on the display visual recognition area and the recommended display area.
  • FIG. 3 is a diagram for explaining a difference between the display visual recognition areas 401H, 401M, and 401L corresponding to the positions 100H, 100M, and 100L in the height direction of the driver's eyes.
  • In the following, the display visual recognition area, the recommended display area, the superimposed display area, and the non-superimposed display area are each expressed as a depth distance forward from the driver's eyes, with the position of the driver's eyes taken as 0 m.
  • the display visual recognition area 401 is an area where the driver can visually recognize the display object 201 of the virtual image 200 superimposed on a real object in the vehicle foreground, and differs depending on the position of the driver's eyes in the height direction.
  • In FIG. 3, the position of the driver's eyes in the height direction is divided into three stages of high (H), medium (M), and low (L), which are called the high eye position 100H, the medium eye position 100M, and the low eye position 100L.
  • For example, measured from the ground, the high eye position 100H is 1.46 m, the medium eye position 100M is 1.4 m, and the low eye position 100L is 1.34 m.
  • The display visual recognition area 401H at the high eye position 100H is a depth of 18 m to 56 m, the display visual recognition area 401M at the medium eye position 100M is a depth of 20 m to 100 m, and the display visual recognition area 401L at the low eye position 100L is a depth of 23 m to 670 m.
  • These display visual recognition areas 401H, 401M, and 401L are calculated based on trigonometric functions.
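  • As a concrete illustration of this trigonometric calculation, the following is a minimal sketch assuming a flat road and sight lines through the upper and lower edges of the virtual image (similar triangles). The virtual image geometry used here (5 m ahead of the eyes, lower edge 1.05 m and upper edge 1.33 m above the ground, i.e. a 0.28 m image) is an assumption chosen so that the results reproduce the areas stated above; the patent does not specify these values.

```python
def display_viewing_area(eye_height_m, image_depth_m=5.0,
                         image_bottom_m=1.05, image_top_m=1.33):
    """Return (near_m, far_m): where the sight lines from the eye through the
    lower and upper edges of the virtual image intersect the road surface."""
    def ground_hit(edge_height_m):
        drop = eye_height_m - edge_height_m   # vertical drop over image_depth_m
        if drop <= 0:
            return float("inf")               # sight line never reaches the road
        return image_depth_m * eye_height_m / drop  # similar triangles
    return ground_hit(image_bottom_m), ground_hit(image_top_m)

for label, h in (("100H", 1.46), ("100M", 1.40), ("100L", 1.34)):
    near, far = display_viewing_area(h)
    print(f"{label}: {near:.0f} m to {far:.0f} m")
# -> 100H: 18 m to 56 m, 100M: 20 m to 100 m, 100L: 23 m to 670 m
```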
  • the display recommendation area 402 is an area where the driver can comfortably visually recognize the display object 201 of the virtual image 200.
  • This recommended display area 402 is predetermined according to the display object 201 of the virtual image 200. For example, when image information that guides the vehicle 1 to turn left at an intersection 75 m ahead is displayed as the display object 201 of the virtual image 200, the recommended display area 402 is a depth of 0 m to 100 m.
  • Ideally, the display visual recognition area 401 includes the recommended display area 402.
  • However, the display visual recognition areas 401H, 401M, and 401L, which differ depending on the driver's eye positions 100H, 100M, and 100L, do not always include the recommended display area 402, which differs depending on the display object 201. Therefore, in Embodiment 1, the area changing unit 22 handles the recommended display area 402 by dividing it into a superimposed display area included in the display visual recognition area 401 and a non-superimposed display area not included in the display visual recognition area 401.
  • The superimposed display area 403 is an area in which the driver can visually recognize the display object 201 of the virtual image 200 superimposed on a real object in the foreground of the vehicle. In other words, the superimposed display area 403 is the area where the display visual recognition area 401 and the recommended display area 402 overlap. In the example of FIG. 3, the superimposed display area 403M for the medium eye position is a depth of 20 m to 100 m.
  • The non-superimposed display area 404 is an area in which the driver cannot visually recognize the display object 201 of the virtual image 200 superimposed on a real object in the foreground of the vehicle. In other words, it is the area where the display visual recognition area 401 and the recommended display area 402 do not overlap. Note that in the non-superimposed display area 404, the display object 201 of the virtual image 200 cannot be visually recognized superimposed on a real object in the vehicle foreground, but it can be visually recognized superimposed on the vehicle foreground itself. In the example of FIG. 3, the non-superimposed display area 404M is a depth of 0 m to 20 m.
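  • The superimposed and non-superimposed display areas can thus be derived by simple interval arithmetic: area 403 is the intersection of areas 401 and 402, and area 404 is the remainder of area 402. The following sketch illustrates this; the function names are illustrative, not from the patent.

```python
def superimposed_area(viewing, recommended):
    """Area 403: overlap of the viewing area 401 and recommended area 402."""
    near, far = max(viewing[0], recommended[0]), min(viewing[1], recommended[1])
    return (near, far) if near < far else None    # None: no overlap at all

def non_superimposed_areas(viewing, recommended):
    """Area 404: parts of the recommended area 402 outside the viewing area 401."""
    areas = []
    if recommended[0] < viewing[0]:               # nearer than area 401
        areas.append((recommended[0], min(recommended[1], viewing[0])))
    if recommended[1] > viewing[1]:               # farther than area 401
        areas.append((max(recommended[0], viewing[1]), recommended[1]))
    return areas

recommended = (0.0, 100.0)                            # area 402 for turn guidance
print(superimposed_area((20, 100), recommended))      # 403M -> (20, 100)
print(non_superimposed_areas((20, 100), recommended)) # 404M -> [(0, 20)]
print(superimposed_area((18, 56), recommended))       # 403H -> (18, 56)
print(non_superimposed_areas((18, 56), recommended))  # 404H -> [(0, 18), (56, 100)]
```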
  • FIG. 4A is a diagram showing a state in which a driver with a medium eye position in the height direction visually recognizes the vehicle foreground.
  • FIG. 4B is a diagram illustrating a state where a driver with a high eye position in the height direction visually recognizes the vehicle foreground.
  • FIG. 4C is a diagram illustrating a state where a driver with a low eye position in the height direction visually recognizes the vehicle foreground.
  • FIGS. 4A, 4B, and 4C are reference examples for aiding understanding of the superimposed display areas 403H, 403M, and 403L of Embodiment 1 (FIG. 6A and the like described later); in FIGS. 4A, 4B, and 4C, the superimposed display area 403 is the same regardless of the driver's eye positions 100H, 100M, and 100L.
  • FIGS. 5A, 5B, and 5C are reference examples for aiding understanding of the HUD system 4 according to Embodiment 1. They illustrate the case where the size of the virtual image 200a in the height direction is larger than that of the virtual image 200 in FIG. 3: FIG. 5A shows the resulting display visual recognition areas, and FIGS. 5B and 5C show the states in which a driver with a high eye position and a driver with a low eye position in the height direction, respectively, visually recognize the vehicle foreground.
  • the display visual recognition areas 401H, 401M, and 401L are different according to the positions 100H, 100M, and 100L of the driver's eyes, and the superimposed display area 403 is also different according to the display visual recognition areas 401H, 401M, and 401L.
  • In FIGS. 4A to 4C, the difference in the superimposed display area 403 corresponding to the eye positions 100H, 100M, and 100L is not considered. Therefore, for example, as shown in FIG. 4A, the driver at the medium eye position 100M can visually recognize the entire superimposed display area 403 within the display visual recognition area 401M, whereas, as shown in FIG. 4B, when the driver's eye position 100H is high, the superimposed display area 403 does not fit within the display visual recognition area 401H, and there is a superimposition-impossible area 410 in which the display object 201 of the virtual image 200 cannot be superimposed and displayed on a real object in the vehicle foreground. Similarly, as shown in FIG. 4C, at the low driver's eye position 100L, the superimposed display area 403 does not fit within the display visual recognition area 401L, and a superimposition-impossible area 410 exists.
  • In order to eliminate the superimposition-impossible areas 410, the size of the virtual image 200a in the height direction is required to be larger than that of the virtual image 200 in FIG. 3, as illustrated in the reference example of FIG. 5A. As a result, the display visual recognition area 401H expands from a depth of 18 m to 56 m to a depth of 18 m to 100 m, and the display visual recognition area 401L expands from a depth of 23 m to 670 m to a depth of 20 m to 670 m. That is, as shown in FIG. 5A, a display visual recognition area 202 for the driver with the high eye position 100H is added to the upper end of the virtual image 200, and a display visual recognition area 203 for the driver with the low eye position 100L is added to the lower end of the virtual image 200.
  • However, the display visual recognition area 203 for the driver with the low eye position 100L, added to the lower end of the virtual image 200, is unnecessary for a driver with the high eye position 100H. Conversely, the display visual recognition area 202 for the driver with the high eye position 100H, added to the upper end of the virtual image 200, is unnecessary for a driver with the low eye position 100L.
  • In this example, merely accommodating a change of ±0.06 m in the height-direction eye position increases the size of the virtual images 200, 200a in the height direction from 0.28 m to 0.38 m, a factor of more than 1.3.
  • Since the space on the vehicle side where the HUD device 3 is installed, such as the dashboard, is limited, increasing the size of the HUD device 3 is not preferable.
  • Therefore, in Embodiment 1, the area changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100, thereby enabling drivers with different eye positions to comfortably visually recognize the virtual image without increasing the size of the HUD device 3.
  • FIG. 6A is a diagram illustrating an example of changing the superimposed display area 403 according to the position of the driver's eyes in Embodiment 1, for a driver with a high eye position in the height direction.
  • the recommended display area 402 has a depth of 0 m to 100 m regardless of the position of the eye in the height direction.
  • the area changing unit 22 calculates the display visual recognition area 401H for the driver at the high eye position 100H as a depth of 18 m to 56 m.
  • the area changing unit 22 calculates the superimposed display area 403H for the driver at the high eye position 100H as the depth of 18 m to 56 m, and calculates the non-superimposed display area 404H as the depth of 0 m to 18 m and 56 m to 100 m.
  • FIG. 6B is a diagram illustrating a state in which the driver with a medium eye position in the height direction visually recognizes the vehicle foreground in the first embodiment.
  • When the image generation unit 23, which will be described later, causes the HUD device 3 to display image information guiding the vehicle 1 to turn left at an intersection 75 m ahead as a guidance display for the driver at the medium eye position 100M, the display object 201 of the virtual image 200 is superimposed and displayed on the intersection, which is the real object ahead.
  • FIG. 6C is a diagram illustrating a state in which the driver with a high eye position in the height direction visually recognizes the vehicle foreground in the first embodiment.
  • When the image generation unit 23, which will be described later, causes the HUD device 3 to display image information guiding the vehicle 1 to turn left at an intersection 75 m ahead as a guidance display for the driver at the high eye position 100H, the intersection, which is a real object, exists in the non-superimposed display area 404H; therefore, the display object 201 is displayed superimposed on the foreground of the vehicle 1 without being superimposed on the intersection. The display position of the display object 201 in this case is assumed to be predetermined. In the example of FIG. 6C, the display object 201 is displayed in the lower part of the virtual image 200.
  • When the intersection, which is a real object, is within the superimposed display area, the image generation unit 23 superimposes and displays the display object 201 of the virtual image 200 on the intersection, as illustrated in FIG. 6B.
  • FIG. 7A is a diagram illustrating an example of changing the superimposed display area 403 according to the position of the driver's eyes in Embodiment 1, for a driver with a low eye position in the height direction.
  • the recommended display area 402 has a depth of 0 m to 100 m regardless of the position of the eye in the height direction.
  • the area changing unit 22 calculates the display visual recognition area 401L for the driver at the low eye position 100L as the depth of 23 m to 670 m.
  • the region changing unit 22 calculates the superimposed display region 403L for the driver at the low eye position 100L as a depth of 23 m to 100 m, and calculates the non-superimposed display region 404L as a depth of 0 m to 23 m.
  • FIG. 7B is a diagram illustrating a state in which the driver with a medium eye position in the height direction visually recognizes the vehicle foreground in the first embodiment.
  • When the image generation unit 23, which will be described later, causes the HUD device 3 to display image information guiding the vehicle 1 to turn left at an intersection 20 m ahead as a guidance display for the driver at the medium eye position 100M, the display object 201 of the virtual image 200 is superimposed and displayed on the intersection, which is the real object ahead.
  • FIG. 7C is a diagram illustrating a state in which the driver with a low eye position in the height direction visually recognizes the vehicle foreground in the first embodiment.
  • When the image generation unit 23, which will be described later, causes the HUD device 3 to display image information guiding the vehicle 1 to turn left at an intersection 20 m ahead as a guidance display for the driver at the low eye position 100L, the intersection, which is a real object, exists in the non-superimposed display area 404L; therefore, the display object 201 is displayed superimposed on the foreground of the vehicle 1 without being superimposed on the intersection. The display position of the display object 201 in this case is assumed to be predetermined.
  • In the above description, the recommended display area 402 corresponding to the display object 201 that guides a left or right turn at an intersection is set to a depth of 0 m to 100 m. However, the recommended display area 402 is not limited to this depth distance and may vary depending on the display object 201. For example, the recommended display area 402 corresponding to a display object 201 that highlights a white line on the road surface is a depth of 30 m to 80 m. Since the superimposed display area 403 and the non-superimposed display area 404 are determined from the recommended display area 402, when the recommended display area 402 changes according to the display object 201, the superimposed display area 403 and the non-superimposed display area 404 change accordingly.
  • Information defining the correspondence between the display object 201 and at least the recommended display area 402 is stored in the database 24 of the HUD control device 2. The database 24 may store not only the correspondence between the display object 201 and the recommended display area 402 but also the correspondence among the display object 201, the display visual recognition area 401, the superimposed display area 403, and the non-superimposed display area 404. A method of using the information stored in the database 24 will be described later.
  • the database 24 does not need to be built in the HUD control device 2, and may be constructed on an external server device (not shown) that can communicate via the wireless communication device 56, for example.
  • the database 24 also stores information related to the HUD device 3 such as the position, size, and distortion amount of the virtual image 200.
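  • The patent specifies what the database 24 holds but not its format; the following is a hypothetical layout, with field names and numeric values invented for illustration (the 5 m image depth is taken from Embodiment 2, and the 0.28 m image height from the discussion of FIG. 5A).

```python
RECOMMENDED_AREAS = {                  # display object 201 -> area 402 (near, far)
    "turn_guidance_arrow": (0.0, 100.0),
    "white_line_highlight": (30.0, 80.0),
}

HUD_INFO = {                           # information related to the HUD device 3
    "virtual_image_depth_m": 5.0,      # assumed eye-to-image depth distance
    "virtual_image_height_m": 0.28,    # height-direction size of the virtual image
    "distortion_map": None,            # distortion amount per eye position, if any
}

def recommended_area(display_object):
    """Look up the recommended display area 402 for a display object 201."""
    return RECOMMENDED_AREAS[display_object]
```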
  • the image generation unit 23 acquires captured image information from an in-vehicle camera 51 and an out-of-vehicle camera 52 of the in-vehicle device 5 described later. Further, for example, the image generation unit 23 acquires the position information of the vehicle 1 from a GPS (Global Positioning System) receiver 53. Further, for example, the image generation unit 23 acquires detection information of an object existing around the vehicle 1 from the radar sensor 54. Further, for example, the image generation unit 23 acquires various types of vehicle information such as the traveling speed of the vehicle 1 from an ECU (Electronic Control Unit) 55. For example, the image generation unit 23 acquires various types of information from the wireless communication device 56. For example, the image generation unit 23 acquires navigation information and information indicating the shape of the road from the navigation device 57.
  • the image generation unit 23 determines a display object 201 to be displayed on the HUD device 3 from among a large number of display objects 201 stored in the database 24 using various information acquired from the in-vehicle device 5.
  • The display object 201 is a figure, a character, or the like representing, for example, the traveling speed of the vehicle 1, the lane in which the vehicle 1 is traveling, the travel route of the vehicle 1, the positions of other vehicles or obstacles existing around the vehicle 1, or the traveling direction of the vehicle 1.
  • The image generation unit 23 determines the display mode of the display object 201 and generates image information including the display object 201 in the determined display mode.
  • the image generation unit 23 outputs the generated image information to the display unit 31 of the HUD device 3. This image information is projected onto the windshield 300 by the HUD device 3 and visually recognized by the driver as a display object 201 of the virtual image 200.
  • The display mode of the display object 201 includes the shape, position, size, and color of the display object 201 in the virtual image 200, and whether the display object 201 is displayed superimposed or non-superimposed on a real object.
  • The image generation unit 23 detects the position of the real object on which the display object 201 is to be superimposed, using various types of information acquired from the in-vehicle device 5, and determines whether the real object exists in the superimposed display area 403. When it determines that the real object exists in the superimposed display area 403, the image generation unit 23 determines the display mode so that the display object 201 is visually recognized superimposed on the real object.
  • For example, the image generation unit 23 may generate binocular parallax image information in which the display object 201 is shifted in the left-right direction, or may deform the image information so that the display object 201 shrinks toward the vanishing point of the foreground of the vehicle 1. Further, so that the display object 201 is visually recognized superimposed on a real object such as an intersection, the image generation unit 23 may change the size or color of the display object 201 according to the real object, or may add a shadow to the display object 201.
  • When the real object does not exist in the superimposed display area 403, that is, when the real object exists in the non-superimposed display area 404, the image generation unit 23 determines the display mode, such as the shape, position, and size of the display object 201, so that the display object 201 is displayed in the lower part of the virtual image 200.
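  • A minimal sketch of this display-mode decision follows: superimpose the display object on the real object when the object lies inside the superimposed display area 403, otherwise fall back to the predetermined position in the lower part of the virtual image 200 (as in FIG. 6C). The names and the returned dictionary format are assumptions.

```python
def decide_display_mode(real_object_depth_m, superimposed_area_403):
    near, far = superimposed_area_403
    if near <= real_object_depth_m <= far:
        # Render anchored to the real object (position, size, perspective
        # deformation toward the vanishing point, shadow, etc.).
        return {"mode": "superimposed", "anchor_depth_m": real_object_depth_m}
    # Real object lies in the non-superimposed display area 404: draw the
    # display object at the predetermined position below the virtual image.
    return {"mode": "non_superimposed", "position": "lower_edge"}

print(decide_display_mode(75.0, (20.0, 100.0)))  # medium eyes -> superimposed
print(decide_display_mode(75.0, (18.0, 56.0)))   # high eyes -> fallback position
```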
  • the in-vehicle device 5 includes an in-vehicle camera 51.
  • the in-vehicle device 5 includes at least one of the outside camera 52, the GPS receiver 53, the radar sensor 54, the ECU 55, the wireless communication device 56, or the navigation device 57.
  • the in-vehicle camera 51 is a camera that images a passenger of the vehicle 1, and particularly images a driver.
  • the in-vehicle camera 51 outputs captured image information to the eye position detection unit 21.
  • the outside camera 52 is a camera that captures the periphery of the vehicle 1. For example, the outside camera 52 images a lane in which the vehicle 1 is traveling, other vehicles existing around the vehicle 1, obstacles, and the like.
  • the vehicle exterior camera 52 outputs captured image information to the HUD control device 2.
  • the GPS receiver 53 receives a GPS signal from a GPS satellite (not shown), and outputs position information corresponding to coordinates indicated by the GPS signal to the HUD control device 2.
  • the radar sensor 54 detects the direction and shape of an object existing around the vehicle 1 and further detects the distance between the vehicle 1 and the object.
  • the radar sensor 54 is, for example, a millimeter wave band radio wave sensor, an ultrasonic sensor, or an optical radar sensor.
  • the radar sensor 54 outputs detection information to the HUD control device 2.
  • the ECU 55 is a control unit that controls various operations of the vehicle 1.
  • the ECU 55 communicates with the HUD control device 2 by a communication method based on a CAN (Controller Area Network) standard, and outputs vehicle information indicating various operation states of the vehicle 1 to the HUD control device 2.
  • the vehicle information includes the traveling speed and steering angle of the vehicle 1.
  • the wireless communication device 56 is a communication device that is connected to an external network and acquires various types of information through wireless communication.
  • the wireless communication device 56 is, for example, a mobile communication terminal such as a receiver mounted on the vehicle 1 or a smartphone brought into the vehicle 1.
  • the network outside the vehicle is, for example, the Internet.
  • Various types of information include weather information around the vehicle 1 and information on facilities.
  • the wireless communication device 56 may acquire information such as the recommended display area 402 corresponding to the image information from the external server device through the external network.
  • the wireless communication device 56 outputs various information to the HUD control device 2.
  • The navigation device 57 searches for a travel route of the vehicle 1 based on destination information set by a passenger of the vehicle 1, map information stored in a storage device (not shown), and position information acquired from the GPS receiver 53, and provides guidance along that route.
  • The storage device that stores the map information may be mounted on the vehicle 1 or may be constructed on an out-of-vehicle server device that can communicate via the wireless communication device 56.
  • The navigation device 57 outputs to the HUD control device 2 navigation information such as the traveling direction of the vehicle 1 at guidance points such as intersections on the travel route, the expected arrival time at a waypoint or the destination, and traffic information on the travel route of the vehicle 1 and surrounding roads.
  • the navigation device 57 may be an information device mounted in the vehicle 1 or a portable communication terminal such as a PND (Portable Navigation Device) or a smartphone brought into the vehicle 1.
  • FIG. 8 is a flowchart showing an operation example of the HUD control device 2 according to the first embodiment.
  • the HUD control device 2 repeatedly executes the processing shown in the flowchart of FIG. 8 during a period in which the engine of the vehicle 1 is on or a period in which the HUD system 4 is on.
  • In step ST11, the eye position detection unit 21 detects the position of the driver's eyes using captured image information from the in-vehicle camera 51. Further, the eye position detection unit 21 determines whether the detected driver's eye position is at a high (H), medium (M), or low (L) height position.
  • FIG. 9 is a diagram for explaining an example of eye position determination by the eye position detection unit 21 according to the first embodiment.
  • The eye position detection unit 21 determines the driver's height-direction eye position 100H, 100M, or 100L using a first threshold value TH1 (for example, 1.45 m) and a second threshold value TH2 (for example, 1.34 m) that are set in advance based on the eyellipse. The eyellipse is the name of the ellipses that statistically represent the distribution of drivers' eye positions; of the three ellipses shown in FIG. 9, a larger ellipse statistically includes more drivers' eye positions.
  • When the detected eye position is equal to or higher than the first threshold value TH1, the eye position detection unit 21 determines it to be the high eye position 100H; when it is less than the first threshold value TH1 and equal to or higher than the second threshold value TH2, the medium eye position 100M; and when it is less than the second threshold value TH2, the low eye position 100L.
  • the height from the ground to the position of the driver's eye 100 is divided into three stages, but may be divided into any number of stages.
  • the third threshold value TH3 and the fourth threshold value TH4 in FIG. 9 will be described later.
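  • The threshold comparison of FIG. 9 can be sketched directly, using the example values from the text (TH1 = 1.45 m, TH2 = 1.34 m). The depth-direction thresholds TH3 and TH4 of Embodiment 2 follow the same pattern, but their numeric values are not given in the patent.

```python
TH1, TH2 = 1.45, 1.34   # height-direction thresholds derived from the eyellipse

def classify_eye_height(eye_height_m):
    """Determine the height-direction eye position as in FIG. 9."""
    if eye_height_m >= TH1:
        return "100H"    # high eye position
    if eye_height_m >= TH2:
        return "100M"    # medium eye position
    return "100L"        # low eye position

assert classify_eye_height(1.46) == "100H"
assert classify_eye_height(1.40) == "100M"
assert classify_eye_height(1.33) == "100L"
```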
  • In step ST12, the image generation unit 23 determines the display object 201 of the image information to be displayed on the HUD device 3, based on various types of information acquired from the in-vehicle device 5.
  • the display object 201 is, for example, content such as an arrow that guides a travel route, content that highlights a white line, content that indicates that a surrounding vehicle has been detected, and the like.
  • the display object 201 is not limited to these contents.
  • In step ST13, the area changing unit 22 acquires information on the position and size of the virtual image 200 from the database 24, which stores information related to the HUD device 3 such as the position, size, and distortion amount of the virtual image 200 and information on the recommended display area 402 corresponding to the display object 201. The area changing unit 22 then calculates the display visual recognition area 401 using the driver's height-direction eye position determined by the eye position detection unit 21 in step ST11 and the position and size of the virtual image 200 acquired from the database 24.
  • the display visual recognition area 401 calculated in advance for each combination of the position and size of the virtual image 200 and the position of the eye in the height direction may be stored in the database 24.
  • In that case, the area changing unit 22 acquires the display visual recognition area 401 from the database 24 without calculating it.
  • the virtual image 200 projected on the windshield 300 may be distorted due to the distortion of the reflection mirror 32 and the windshield 300.
  • In that case, the area changing unit 22 may correct the distortion of the display object 201 in the image information using information, stored in the database 24, indicating the correspondence between the height-direction eye position and the distortion amount of the virtual image 200, thereby suppressing the distortion of the display object 201 in the virtual image 200 projected onto the windshield 300.
  • In step ST14, the area changing unit 22 specifies the recommended display area 402 corresponding to the display object 201 determined by the image generation unit 23, using the information, stored in the database 24, indicating the correspondence between the display object 201 and the recommended display area 402.
  • In step ST15, the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 corresponding to the driver's eye position determined by the eye position detection unit 21 in step ST11, using the display visual recognition area 401 calculated in step ST13 and the recommended display area 402 specified in step ST14.
  • When the correspondence among the display object 201, the eye position, the superimposed display area 403, and the non-superimposed display area 404 is stored in the database 24, the area changing unit 22 acquires the superimposed display area 403 and the non-superimposed display area 404 from the database 24 without calculating them.
  • In step ST16, the image generation unit 23 determines the display mode of the display object 201 determined in step ST12, using the superimposed display area 403 or the non-superimposed display area 404 calculated by the area changing unit 22 in step ST15. Then, the image generation unit 23 generates image information including the display object 201 in the determined display mode and outputs the image information to the display unit 31 of the HUD device 3.
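  • Composing the sketches above gives an end-to-end picture of one pass through the flowchart of FIG. 8 (steps ST11 to ST16). How the eye height and the real object's depth are obtained is abstracted away here and is an assumption for illustration.

```python
def hud_control_cycle(eye_height_m, display_object, real_object_depth_m):
    # ST11: eye_height_m stands in for the eye position detected by the eye
    # position detection unit 21 from the in-vehicle camera image.
    eye_class = classify_eye_height(eye_height_m)
    # ST12: display_object is assumed to be already determined from the
    # information of the in-vehicle device 5.
    viewing = display_viewing_area(eye_height_m)        # ST13: area 401
    recommended = recommended_area(display_object)      # ST14: area 402
    overlay = superimposed_area(viewing, recommended)   # ST15: areas 403/404
    if overlay is None:                                 # no usable overlap
        return eye_class, {"mode": "non_superimposed", "position": "lower_edge"}
    return eye_class, decide_display_mode(real_object_depth_m, overlay)  # ST16

print(hud_control_cycle(1.46, "turn_guidance_arrow", 75.0))
# -> ('100H', {'mode': 'non_superimposed', 'position': 'lower_edge'}): the 75 m
#    intersection lies outside 403H (18 m to 56 m), matching FIG. 6C.
```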
  • the HUD control device 2 includes the eye position detection unit 21, the image generation unit 23, and the region change unit 22.
  • the eye position detection unit 21 detects the position of the driver's eye 100 in the height direction.
  • the image generation unit 23 generates image information to be displayed on the display unit 31 of the HUD device 3.
  • The area changing unit 22 changes, according to the height-direction position of the driver's eye 100 detected by the eye position detection unit 21, the superimposed display area 403 in which the image information generated by the image generation unit 23 is superimposed and displayed as the virtual image 200 on a real object in the foreground of the vehicle 1. With this configuration, the driver can be made to visually recognize the virtual image 200 superimposed on a real object regardless of the height-direction position of the driver's eye 100, without increasing the size of the HUD device 3.
  • Further, the area changing unit 22 changes the superimposed display area 403 according to the height-direction position of the driver's eye 100 and the image information generated by the image generation unit 23.
  • the area changing unit 22 can change the superimposed display area 403 more accurately by changing the superimposed display area 403 for each display object 201 included in the image information.
  • Since the configuration of the HUD system 4 according to Embodiment 2 is the same as that of Embodiment 1 shown in FIG. 1, FIG. 1 is referred to below.
  • In Embodiment 2, the eye position detection unit 21 detects the position of the driver's eye 100 in the depth direction in addition to the position in the height direction. For example, the eye position detection unit 21 determines the driver's depth-direction eye position 100F, 100C, or 100B using a third threshold value TH3 and a fourth threshold value TH4 that are set in advance based on the eyellipse shown in FIG. 9. Here, the position of the driver's eyes in the depth direction is divided by the third threshold value TH3 and the fourth threshold value TH4 into three stages of the front eye position 100F, the center eye position 100C, and the rear eye position 100B, but it may be divided into any number of stages.
  • FIG. 10 is a diagram illustrating an example of changing the superimposed display area 403 according to the position of the driver's eyes in Embodiment 2, for a driver whose depth-direction eye position is rearward.
  • the recommended display area 402 has a depth of 0 m to 100 m regardless of the position of the eye in the depth direction.
  • Here, both the front eye position 100F and the rear eye position 100B are located at a height of 1.4 m above the ground; the depth distance from the front eye position 100F to the virtual image 200 is 5 m, and the depth distance from the rear eye position 100B to the virtual image 200 is 5.5 m.
  • the display visual recognition area 401F for the driver at the front eye position 100F and the display visual recognition area 401B for the driver at the rear eye position 100B are different.
  • the display visual recognition area 401F has a depth of 20 m to 100 m
  • the display visual recognition area 401B has a depth of 22 m to 110 m.
  • the display visual recognition areas 401F and 401B are calculated based on trigonometric functions.
  • the superimposed display area 403 and the non-superimposed display area 404 are also different depending on the position of the driver's eyes in the depth direction.
  • the superimposed display area 403F for the driver at the front eye position 100F has a depth of 20m to 100m
  • the non-superimposed display area 404F has a depth of 0m to 20m
  • the superimposed display area 403B for the driver at the rear eye position 100B has a depth of 22m to 100m
  • the non-superimposed display area 404B has a depth of 0m to 22m.
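  • The same similar-triangle sketch from Embodiment 1 covers the depth direction as well: moving the eye rearward increases the eye-to-image depth distance, which shifts the display visual recognition area. With the assumed image edge heights from the earlier sketch and the 1.4 m eye height given above, the numbers in the text are reproduced.

```python
near_f, far_f = display_viewing_area(1.40, image_depth_m=5.0)   # 100F, 5 m
near_b, far_b = display_viewing_area(1.40, image_depth_m=5.5)   # 100B, 5.5 m
print(f"401F: {near_f:.0f} m to {far_f:.0f} m")   # -> 401F: 20 m to 100 m
print(f"401B: {near_b:.0f} m to {far_b:.0f} m")   # -> 401B: 22 m to 110 m
```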
  • the eye position detection unit 21 detects the position of the driver's eye 100 in the depth direction.
  • the area changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100 in the depth direction.
  • In this way, the area changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100 in the depth direction in addition to the height direction, and can therefore change the superimposed display area 403 more accurately.
  • Further, the area changing unit 22 changes the superimposed display area 403 according to the depth-direction position of the driver's eye 100 and the image information generated by the image generation unit 23.
  • the area changing unit 22 can change the superimposed display area 403 more accurately by changing the superimposed display area 403 for each display object 201 included in the image information.
  • FIG. 11 is a block diagram showing a main part of the HUD system 4 according to the third embodiment.
  • FIG. 12 is a configuration diagram of the HUD system 4 according to the third embodiment when mounted on a vehicle.
  • the HUD system 4 according to the third embodiment has a configuration in which an angle information acquisition unit 25 and a reflection mirror adjustment unit 33 are added to the HUD system 4 according to the first embodiment shown in FIG.
  • In FIGS. 11 and 12, the same or corresponding parts as those in FIGS. 1 and 2 are denoted by the same reference numerals, and description thereof is omitted.
  • the HUD device 3 includes a reflection mirror adjustment unit 33 that adjusts the tilt angle of the reflection mirror 32.
  • the reflection mirror adjustment unit 33 is an actuator or the like.
  • the reflection mirror adjustment unit 33 adjusts the tilt angle of the reflection mirror 32 in accordance with a driver's instruction or the like.
  • the reflection mirror adjustment unit 33 outputs angle information including the adjusted tilt angle of the reflection mirror 32.
  • the HUD control device 2 includes an angle information acquisition unit 25.
  • the angle information acquisition unit 25 acquires angle information including the tilt angle of the reflection mirror 32 from the reflection mirror adjustment unit 33 and outputs the angle information to the region change unit 22.
  • FIG. 13 is a diagram illustrating a correspondence relationship between the tilt angle of the reflection mirror 32 and the position of the virtual image 200 in the third embodiment.
  • The reflection mirror adjustment unit 33 changes the reflection angle, on the reflection mirror 32, of the light beam emitted from the display unit 31, thereby changing the position of the virtual image 200 in the height direction and the depth direction. As the position of the virtual image 200 changes, the display visual recognition area 401 also changes, and the reflection mirror 32 can be kept small.
  • Therefore, in Embodiment 3, the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 in consideration of the height-direction and depth-direction position of the virtual image 200 corresponding to the tilt angle of the reflection mirror 32.
  • As in Embodiment 1, the database 24 stores information related to the HUD device 3, such as the position, size, and distortion amount of the virtual image 200, and information on the recommended display area 402 corresponding to the display object 201. Furthermore, in Embodiment 3, information indicating the correspondence between the tilt angle of the reflection mirror 32 and the position, size, distortion amount, and the like of the virtual image 200 is stored.
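  • A hypothetical sketch of that correspondence follows: a lookup table from mirror tilt angle to virtual image geometry, fed into the viewing-area calculation from Embodiment 1. The tilt angles and image positions below are invented for illustration; the patent stores such a correspondence but gives no numeric values.

```python
TILT_TO_IMAGE = {
    # tilt_deg: (image_depth_m, image_bottom_m, image_top_m)
    28.0: (4.8, 1.00, 1.28),
    30.0: (5.0, 1.05, 1.33),
    32.0: (5.2, 1.10, 1.38),
}

def viewing_area_for_tilt(eye_height_m, tilt_deg):
    """Area 401 given the mirror tilt: database lookup, then the same
    similar-triangle calculation as in Embodiment 1 (steps ST31/ST13a)."""
    depth, bottom, top = TILT_TO_IMAGE[tilt_deg]
    return display_viewing_area(eye_height_m, image_depth_m=depth,
                                image_bottom_m=bottom, image_top_m=top)

print(viewing_area_for_tilt(1.40, 30.0))   # same geometry as Embodiment 1
```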
  • FIG. 14 is a flowchart showing an operation example of the HUD control device 2 according to Embodiment 3. Steps ST11, ST12, ST14, and ST16 in FIG. 14 are the same operations as steps ST11, ST12, ST14, and ST16 in FIG. 8, and description thereof is omitted.
  • step ST31 the angle information acquisition unit 25 acquires angle information including the tilt angle of the reflection mirror 32 from the reflection mirror adjustment unit 33.
  • The area changing unit 22 acquires from the database 24 information on the position and size of the virtual image 200 corresponding to the tilt angle of the reflection mirror 32 acquired by the angle information acquisition unit 25, and specifies the position and size of the virtual image 200.
  • In step ST13a, the area changing unit 22 calculates the display visual recognition area 401 using the driver's eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11, and the position and size of the virtual image 200 specified in step ST31.
  • a display visual recognition area 401 calculated in advance for each combination of the position and size of the virtual image 200 and the eye position in at least one of the height direction and the depth direction may be stored in the database 24.
  • In that case, the area changing unit 22 acquires the display visual recognition area 401 from the database 24 without calculating it.
  • In step ST15a, the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 corresponding to the eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11, using the display visual recognition area 401 calculated in step ST13a and the recommended display area 402 specified in step ST14. Information defining the correspondence among the display object 201, the eye position in at least one of the height direction and the depth direction, the superimposed display area 403, and the non-superimposed display area 404 may be stored in the database 24. In that case, the area changing unit 22 acquires the superimposed display area 403 and the non-superimposed display area 404 from the database 24 without calculating them.
  • the HUD device 3 includes the reflection mirror adjustment unit 33 that adjusts the tilt angle of the reflection mirror 32.
  • the HUD control device 2 includes an angle information acquisition unit 25 that acquires angle information including the tilt angle of the reflection mirror 32 from the reflection mirror adjustment unit 33.
  • The area changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100 and the tilt angle of the reflection mirror 32 acquired by the angle information acquisition unit 25.
  • the area changing unit 22 can change the superimposed display area 403 in response to the change in the display visual recognition area 401 accompanying the change in the tilt angle of the reflection mirror 32. Therefore, the area changing unit 22 can change the superimposed display area 403 more accurately.
  • FIG. 15 is a block diagram illustrating a main part of the HUD system 4 according to the fourth embodiment.
  • FIG. 16 is a configuration diagram when the HUD system 4 according to the fourth embodiment is mounted on a vehicle.
  • the HUD system 4 according to the fourth embodiment has a configuration in which a position information acquisition unit 26 and a HUD device position adjustment unit 34 are added to the HUD system 4 according to the first embodiment shown in FIG.
  • In FIGS. 15 and 16, the same or corresponding parts as those in FIGS. 1 and 2 are denoted by the same reference numerals, and description thereof is omitted.
  • the HUD device 3 includes a HUD device position adjustment unit 34 that adjusts the tilt angle or the depth position of the HUD device 3 or a part of the HUD device 3.
  • the HUD device position adjustment unit 34 is an actuator or the like.
  • In accordance with a driver's instruction or the like, the HUD device position adjustment unit 34 adjusts at least one of the tilt angle or depth position of the display unit 31, the tilt angle or depth position of the reflection mirror 32, or the tilt angle or depth position of the housing of the HUD device 3 incorporating the display unit 31 and the reflection mirror 32. The HUD device position adjustment unit 34 outputs position information including the adjusted tilt angle or depth position of the HUD device 3 or of the part of the HUD device 3.
  • the HUD control device 2 includes a position information acquisition unit 26.
  • the position information acquisition unit 26 acquires position information including the tilt angle or depth position of the HUD device 3 or a part of the HUD device 3 from the HUD device position adjustment unit 34 and outputs the position information to the region change unit 22.
  • FIG. 17 is a diagram illustrating a correspondence relationship between the depth position of the HUD device 3 and the position of the virtual image 200 in the fourth embodiment.
  • the position of the virtual image 200 in the height direction and the depth direction is changed by the HUD device position adjusting unit 34 changing the depth position of the housing 35 of the HUD device 3.
  • the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 in consideration of the depth position of the housing 35 of the HUD device 3.
  • When the tilt angle or depth position of the display unit 31 or of the reflection mirror 32 is adjusted instead, the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 in consideration of those depth positions and tilt angles.
  • As in Embodiment 1, the database 24 stores information related to the HUD device 3, such as the position, size, and distortion amount of the virtual image 200, and information on the recommended display area 402 corresponding to the display object 201. Furthermore, in Embodiment 4, information is stored indicating the correspondence between at least one of the tilt angle or depth position of the display unit 31, the tilt angle or depth position of the reflection mirror 32, or the tilt angle or depth position of the entire HUD device 3, and the position, size, distortion amount, and the like of the virtual image 200.
  • FIG. 18 is a flowchart showing an operation example of the HUD control device 2 according to Embodiment 4. Steps ST11, ST12, ST14, and ST16 in FIG. 18 are the same operations as steps ST11, ST12, ST14, and ST16 in FIG. 8, and description thereof is omitted.
  • In step ST41, the position information acquisition unit 26 acquires from the HUD device position adjustment unit 34 position information including at least one of the tilt angle or depth position of the display unit 31, the tilt angle or depth position of the reflection mirror 32, or the tilt angle or depth position of the entire HUD device 3. The area changing unit 22 then acquires from the database 24 information on the position and size of the virtual image 200 corresponding to the acquired tilt angles or depth positions, and specifies the position and size of the virtual image 200.
  • In step ST13b, the area changing unit 22 calculates the display visual recognition area 401 using the position and size of the virtual image 200 specified in step ST41, and the driver's eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11.
  • a display visual recognition area 401 calculated in advance for each combination of the position and size of the virtual image 200 and the eye position in at least one of the height direction and the depth direction may be stored in the database 24.
  • In that case, the area changing unit 22 acquires the display visual recognition area 401 from the database 24 without calculating it.
  • In step ST15b, the area changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 corresponding to the eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11, using the display visual recognition area 401 calculated in step ST13b and the recommended display area 402 specified in step ST14. Information defining the correspondence among the display object 201, the eye position in at least one of the height direction and the depth direction, the superimposed display area 403, and the non-superimposed display area 404 may be stored in the database 24. In that case, the area changing unit 22 acquires the superimposed display area 403 and the non-superimposed display area 404 from the database 24 without calculating them.
  • the HUD device 3 includes the HUD device position adjustment unit 34 that adjusts at least one of the depth position or the tilt angle of the entire HUD device 3 or a part of the HUD device 3.
  • the HUD control device 2 includes a position information acquisition unit 26 that acquires position information including at least one of the depth position and the tilt angle of the entire HUD device 3 or a part of the HUD device 3 from the HUD device position adjustment unit 34.
  • The area changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100 and at least one of the depth position or tilt angle, acquired by the position information acquisition unit 26, of the entire HUD device 3 or a part of the HUD device 3.
  • the area changing unit 22 can change the superimposed display area 403 in response to the change of the display visual recognition area 401 accompanying the position change or the angle change of the HUD device 3 or a part of the HUD device 3. Therefore, the area changing unit 22 can change the superimposed display area 403 more accurately.
In each embodiment above, the HUD control device 2 is configured to control the HUD device 3, but it may instead be configured to control an HMD (Head-Mounted Display) device. That is, the control target of the HUD control device 2 may be any display device capable of displaying a stereoscopic image, such as a HUD or an HMD.
FIGS. 19A and 19B are diagrams illustrating hardware configuration examples of the HUD control device 2 according to each embodiment.
The database 24 in the HUD control device 2 is realized by a memory 1001. The functions of the eye position detection unit 21, the area changing unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 in the HUD control device 2 are realized by a processing circuit. That is, the HUD control device 2 includes a processing circuit for realizing the above functions.
The processing circuit may be a processing circuit 1000 as dedicated hardware, or may be a processor 1002 that executes a program stored in the memory 1001.
When the processing circuit is dedicated hardware, the processing circuit 1000 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or a combination thereof.
The functions of the eye position detection unit 21, the area changing unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 may be realized by a plurality of processing circuits 1000, or collectively by a single processing circuit 1000.
When the processing circuit is the processor 1002, the functions of the eye position detection unit 21, the area changing unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 are realized by software, firmware, or a combination of software and firmware. The software or firmware is described as a program and stored in the memory 1001. The processor 1002 reads out and executes the program stored in the memory 1001, thereby realizing the function of each unit. That is, the HUD control device 2 includes the memory 1001 for storing a program that, when executed by the processor 1002, results in the execution of the steps shown in the flowcharts described above. It can also be said that this program causes a computer to execute the procedures or methods of the eye position detection unit 21, the area changing unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26.
The processor 1002 is a CPU (Central Processing Unit), a processing device, an arithmetic device, a microprocessor, or the like.
The memory 1001 may be a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), or a flash memory; a magnetic disk such as a hard disk or a flexible disk; or an optical disc such as a CD (Compact Disc) or a DVD (Digital Versatile Disc).
Note that a part of the functions of the eye position detection unit 21, the area changing unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 may be realized by dedicated hardware, and a part by software or firmware. In this way, the processing circuit in the HUD control device 2 can realize the above-described functions by hardware, software, firmware, or a combination thereof.
In each embodiment above, the HUD control device 2 has the function of controlling only the HUD device 3, but it may also have, in addition to the function of controlling the HUD device 3, a function of controlling one or more other display devices (for example, a center display). The HUD control device 2 may also be incorporated into a display control device that controls various display devices mounted on the vehicle, such as the HUD device 3 and a center display.
In each embodiment above, the HUD control device 2 is configured to display the display object 201 on the HUD device 3 as the virtual image 200, but it may also be configured to output information related to the display object 201 from a speaker. For example, when the HUD control device 2 presents to the driver information guiding the vehicle 1 to turn left at an intersection 75 m ahead, it causes the HUD device 3 to display the display object 201 guiding the left turn when the intersection is in the superimposed display area 403, and causes a speaker to output a sound guiding the left turn at the intersection when the intersection is in the non-superimposed display area 404.
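The speaker fallback described above amounts to testing whether the guided intersection lies inside the current superimposed display area 403. A minimal sketch under that reading (the function name and message strings are assumptions):

```python
def present_guidance(object_depth_m: float,
                     superimposed_area: tuple[float, float]) -> str:
    """Superimpose the display object 201 on the intersection when it lies in
    the superimposed display area 403; otherwise draw it at a fixed position
    and additionally announce the turn from a speaker."""
    near, far = superimposed_area
    if near <= object_depth_m <= far:
        return "HUD: superimpose display object 201 on the intersection"
    return "HUD: fixed-position display object 201 + speaker: announce left turn"

# Intersection 75 m ahead, high eye position 100H (area 18 m to 56 m):
print(present_guidance(75.0, (18.0, 56.0)))  # -> speaker fallback
```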
As described above, the HUD control device according to the present invention adapts to differences in the position of the driver's eyes without increasing the size of the HUD device, and is therefore suitable for use as a HUD control device that controls an AR-HUD device for a vehicle or the like.
2 HUD control device, 3 HUD device, 4 HUD system, 5 in-vehicle device, 21 eye position detection unit, 22 area changing unit, 23 image generation unit, 24 database, 25 angle information acquisition unit, 26 position information acquisition unit, 31 display unit, 32 reflection mirror, 33 reflection mirror adjustment unit, 34 HUD device position adjustment unit (position adjustment unit), 35 housing, 51 in-vehicle camera, 52 outside camera, 53 GPS receiver, 54 radar sensor, 55 ECU, 56 wireless communication device, 57 navigation device, 100 driver's eye, 100B, 100C, 100F, 100H, 100L, 100M driver's eye position, 200, 200a virtual image, 201 display object, 202, 203 display viewing area, 300 windshield (projection surface), 401, 401B, 401F, 401H, 401L, 401M display visual recognition area, 402 recommended display area, 403, 403B, 403F, 403H, 403L, 403M superimposed display area, 404, 404B, 404F,

Abstract

An HUD device (3) is provided with: a display unit (31) which displays image information; and a reflective mirror (32) which reflects the image information displayed by the display unit (31) so as to project the image information onto a windshield (300), wherein the image information is superimposed and displayed as a virtual image (200) on the foreground of a vehicle (1) visually recognized by a driver. An HUD control device (2) is provided with: an eye position detection unit (21) which detects the positions of the eyes (100) of the driver; an image generation unit (23) which generates image information to be displayed on the display unit (31); and a region change unit (22) which, according to the positions of the eyes (100) of the driver detected by the eye position detection unit (21), changes a superimposition display region (403) where superimposition display of the image information generated by the image generation unit (23) as the virtual image (200) is performed on a real object located in the foreground of the vehicle (1).

Description

Head-up display control device, head-up display system, and head-up display control method
The present invention relates to a head-up display control device that controls a head-up display device for a vehicle, a head-up display system including a head-up display device and a head-up display control device, and a head-up display control method for controlling a head-up display device.
A vehicle HUD (Head Up Display) device is a display device that allows a driver to visually recognize image information without greatly moving his or her line of sight from the forward field of view. In particular, an AR-HUD device using AR (Augmented Reality) can present information to the driver more intuitively and clearly than an existing HUD device by superimposing image information such as a route guidance arrow on a real object such as a road (see, for example, Patent Document 1).
International Publication No. 2017/134865
The AR-HUD device described in Patent Document 1 is configured such that an image displayed on a video display device such as a projector or a liquid crystal display is reflected by a mirror and projected onto the windshield of a vehicle. By viewing the image projected on the windshield, the driver visually recognizes it as a virtual image ahead of the transparent windshield.
In order for the AR-HUD device to superimpose a virtual image on a real object, the display visual recognition area in which the driver can visually recognize the virtual image needs to lie within the recommended display area in which the driver can comfortably visually recognize it. Since the height of the driver's eyes and the look-down angle with respect to the virtual image differ for each eye position, the display visual recognition area also differs for each eye position. Therefore, in order for the driver to visually recognize the virtual image regardless of the eye position, the AR-HUD device needs to enlarge the display visual recognition area by enlarging the virtual image. However, enlarging the virtual image requires enlarging the video display device, the mirror, and the like, and as a result the AR-HUD device itself becomes larger. Since the space on the vehicle side where the AR-HUD device is installed is limited, an increase in the size of the AR-HUD device is undesirable.
The present invention has been made to solve the above problems, and an object thereof is to allow the driver to visually recognize a virtual image superimposed on a real object, regardless of the position of the driver's eyes, without increasing the size of the head-up display device.
A head-up display control device according to the present invention controls a head-up display device that includes a display unit that displays image information and a reflection mirror that reflects the image information displayed by the display unit and projects it onto a projection surface, and that superimposes the image information as a virtual image on the foreground of a vehicle visually recognized by a driver. The head-up display control device includes an eye position detection unit that detects the position of the driver's eyes, an image generation unit that generates the image information to be displayed on the display unit, and an area changing unit that changes, according to the position of the driver's eyes detected by the eye position detection unit, a superimposed display area in which the image information generated by the image generation unit is superimposed and displayed as a virtual image on a real object in the foreground of the vehicle.
According to the present invention, the superimposed display area in which image information is superimposed and displayed as a virtual image on a real object in the foreground of the vehicle is changed according to the position of the driver's eyes. Therefore, the driver can visually recognize the virtual image superimposed on the real object regardless of the position of the driver's eyes, without increasing the size of the head-up display device.
A block diagram showing the main part of the HUD system according to Embodiment 1.
A configuration diagram of the HUD system according to Embodiment 1 when mounted on a vehicle.
A diagram explaining the difference in the display visual recognition area according to the position of the driver's eyes in the height direction.
A diagram showing a state in which a driver with a medium eye position in the height direction visually recognizes the vehicle foreground.
A diagram showing a state in which a driver with a high eye position in the height direction visually recognizes the vehicle foreground.
A diagram showing a state in which a driver with a low eye position in the height direction visually recognizes the vehicle foreground.
A reference example for helping understanding of the HUD system according to Embodiment 1, explaining the difference in the display visual recognition area according to the position of the driver's eyes in the height direction when the virtual image is larger in the height direction than in FIG. 3.
A reference example for helping understanding of the HUD system according to Embodiment 1, showing a state in which a driver with a high eye position in the height direction visually recognizes the vehicle foreground when the virtual image is larger in the height direction than in FIG. 3.
A reference example for helping understanding of the HUD system according to Embodiment 1, showing a state in which a driver with a low eye position in the height direction visually recognizes the vehicle foreground when the virtual image is larger in the height direction than in FIG. 3.
A diagram showing a change example of the superimposed display area according to the position of the driver's eyes in Embodiment 1, for a driver with a high eye position in the height direction.
A diagram showing a state in which a driver with a medium eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1.
A diagram showing a state in which a driver with a high eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1.
A diagram showing a change example of the superimposed display area according to the position of the driver's eyes in Embodiment 1, for a driver with a low eye position in the height direction.
A diagram showing a state in which a driver with a medium eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1.
A diagram showing a state in which a driver with a low eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1.
A flowchart showing an operation example of the HUD control device according to Embodiment 1.
A diagram explaining an example of eye position determination by the eye position detection unit in Embodiment 1.
A diagram showing a change example of the superimposed display area according to the position of the driver's eyes in Embodiment 2, for a driver whose eye position in the depth direction is rearward.
A block diagram showing the main part of the HUD system according to Embodiment 3.
A configuration diagram of the HUD system according to Embodiment 3 when mounted on a vehicle.
A diagram showing the correspondence between the tilt angle of the reflection mirror and the position of the virtual image in Embodiment 3.
A flowchart showing an operation example of the HUD control device according to Embodiment 3.
A block diagram showing the main part of the HUD system according to Embodiment 4.
A configuration diagram of the HUD system according to Embodiment 4 when mounted on a vehicle.
A diagram showing the correspondence between the depth position of the HUD device and the position of the virtual image in Embodiment 4.
A flowchart showing an operation example of the HUD control device according to Embodiment 4.
A diagram showing a hardware configuration example of the HUD control device according to each embodiment.
A diagram showing a hardware configuration example of the HUD control device according to each embodiment.
Hereinafter, in order to explain the present invention in more detail, modes for carrying out the present invention will be described with reference to the accompanying drawings.

Embodiment 1.

FIG. 1 is a block diagram showing a main part of the HUD system 4 according to Embodiment 1. FIG. 2 is a configuration diagram of the HUD system 4 according to Embodiment 1 when mounted on a vehicle. As shown in FIGS. 1 and 2, a vehicle 1 is equipped with a HUD system 4 including a HUD control device 2 and a HUD device 3, and with an in-vehicle device 5.
The HUD device 3 includes a display unit 31 and a reflection mirror 32. The display unit 31 displays image information generated by the HUD control device 2. A display such as a liquid crystal display, a projector, a laser light source, or the like is used as the display unit 31. The reflection mirror 32 reflects the display light of the image information displayed by the display unit 31 and projects it onto the windshield 300. The driver visually recognizes, from the position of the eye 100, the display object 201 of the virtual image 200 formed beyond the windshield 300. The windshield 300 is the projection surface of the virtual image 200. The projection surface is not limited to the windshield 300 and may be a half mirror called a combiner.
The HUD control device 2 includes an eye position detection unit 21, an area changing unit 22, an image generation unit 23, and a database 24.
The eye position detection unit 21 acquires captured image information of the driver captured by an in-vehicle camera 51 described later, analyzes the acquired captured image information, and detects the position of the driver's eye 100 in the height direction. The eye position detection unit 21 may detect the positions of the driver's left and right eyes as the position of the driver's eye 100, or may detect the center position between the left eye and the right eye. The eye position detection unit 21 may also estimate the center position between the left eye and the right eye from the driver's face position in the captured image information.
The area changing unit 22 changes the superimposed display area according to the position of the driver's eye 100 detected by the eye position detection unit 21.

Specifically, the area changing unit 22 calculates the display visual recognition area based on the position of the driver's eye 100 and the position of the virtual image 200. The area changing unit 22 also specifies the recommended display area corresponding to the display object 201 of the virtual image 200, and changes the superimposed display area based on the display visual recognition area and the recommended display area. The area changing unit 22 also changes the non-superimposed display area based on the display visual recognition area and the recommended display area.
Here, the display visual recognition area, the recommended display area, the superimposed display area, and the non-superimposed display area will be described with reference to FIG. 3. FIG. 3 is a diagram explaining the difference between the display visual recognition areas 401H, 401M, and 401L corresponding to the positions 100H, 100M, and 100L of the driver's eyes in the height direction. As shown in FIG. 3, each of these areas is expressed as a depth distance forward from the position of the driver's eyes, with the eye position taken as 0 m.
The display visual recognition area 401 is an area in which the driver can visually recognize the display object 201 of the virtual image 200 superimposed on a real object in the vehicle foreground, and it differs depending on, for example, the position of the driver's eyes in the height direction. In this example, the position of the driver's eyes in the height direction is divided into three levels, high (H), medium (M), and low (L), referred to as the high eye position 100H, the medium eye position 100M, and the low eye position 100L. The high eye position 100H is 1.46 m, the medium eye position 100M is 1.4 m, and the low eye position 100L is 1.34 m. When the driver views the virtual image 200, which is 0.28 m in the height direction and located at a depth of 5 m from the eye positions 100H, 100M, and 100L, the display visual recognition area 401H is 18 m to 56 m at the high eye position 100H, the display visual recognition area 401M is 20 m to 100 m at the medium eye position 100M, and the display visual recognition area 401L is 23 m to 670 m at the low eye position 100L. These display visual recognition areas 401H, 401M, and 401L are calculated based on trigonometric functions.
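The trigonometric calculation can be pictured as intersecting the sight lines through the virtual image's lower and upper edges with the road surface. The sketch below reproduces the ranges quoted above under the assumption, inferred from those numbers rather than stated in the patent, that the virtual image's lower and upper edges lie about 1.054 m and 1.330 m above the ground (a height of about 0.28 m):

```python
def display_viewing_area(eye_height_m: float,
                         image_depth_m: float = 5.0,
                         image_bottom_m: float = 1.054,
                         image_top_m: float = 1.330) -> tuple[float, float]:
    """Intersect the sight lines through the virtual image's lower and upper
    edges with the road surface (similar triangles). Returns (near, far):
    the depth range on the road where the virtual image overlaps the scene."""
    def ground_hit(edge_height_m: float) -> float:
        drop = eye_height_m - edge_height_m  # vertical drop from eye to edge
        if drop <= 0:
            return float("inf")  # sight line never reaches the road
        return image_depth_m * eye_height_m / drop

    return ground_hit(image_bottom_m), ground_hit(image_top_m)

for label, eye in (("100H", 1.46), ("100M", 1.40), ("100L", 1.34)):
    near, far = display_viewing_area(eye)
    print(f"{label}: {near:.0f} m to {far:.0f} m")
# 100H: 18 m to 56 m / 100M: 20 m to 100 m / 100L: 23 m to 670 m
```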
The recommended display area 402 is an area in which the driver can comfortably visually recognize the display object 201 of the virtual image 200. The recommended display area 402 is assumed to be predetermined according to the display object 201 of the virtual image 200. For example, when image information guiding the vehicle 1 to turn left at an intersection 75 m ahead is displayed as the display object 201 of the virtual image 200, the recommended display area 402 is a depth of 0 m to 100 m.

In order for the display object 201 to be visually recognized as superimposed on a real object, it is desirable that the display visual recognition area 401 include the recommended display area 402. However, the display visual recognition areas 401H, 401M, and 401L, which differ depending on the driver's eye positions 100H, 100M, and 100L, do not always include the recommended display area 402, which differs depending on the display object 201. Therefore, in Embodiment 1, the area changing unit 22 handles the recommended display area 402 by dividing it into a superimposed display area included in the display visual recognition area 401 and a non-superimposed display area not included in the display visual recognition area.
The superimposed display area 403 is an area in which the driver can visually recognize the display object 201 of the virtual image 200 superimposed on a real object in the vehicle foreground; as an example, it is the area where the display visual recognition area 401 and the recommended display area 402 overlap. In FIG. 3, assuming that the recommended display area 402 of the display object 201 is a depth of 0 m to 100 m and that the driver's eye 100 is at the medium height position 100M, the superimposed display area 403 is a depth of 20 m to 100 m.
The non-superimposed display area 404 is an area in which the driver cannot visually recognize the display object 201 of the virtual image 200 superimposed on a real object in the vehicle foreground; as an example, it is the area where the display visual recognition area 401 and the recommended display area 402 do not overlap. Note that in the non-superimposed display area 404 the driver cannot visually recognize the display object 201 of the virtual image 200 superimposed on a real object in the vehicle foreground, but can visually recognize it superimposed on the vehicle foreground itself. In FIG. 3, assuming that the recommended display area 402 of the display object 201 is a depth of 0 m to 100 m and that the driver's eye 100 is at the medium height position 100M, the non-superimposed display area 404M is a depth of 0 m to 20 m.
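Reading each area as a (near, far) depth interval, splitting the recommended display area 402 into the superimposed display area 403 and the non-superimposed display area 404 is interval intersection and difference. A minimal sketch under that reading (the interval representation and helper name are assumptions):

```python
def split_recommended_area(viewing, recommended):
    """Split the recommended display area into the part visible superimposed
    on real objects (superimposed display area) and the remainder
    (non-superimposed display area). Intervals are (near, far) in metres."""
    v0, v1 = viewing
    r0, r1 = recommended
    lo, hi = max(v0, r0), min(v1, r1)
    superimposed = (lo, hi) if lo < hi else None
    non_superimposed = []
    if superimposed is None:
        non_superimposed.append((r0, r1))
    else:
        if r0 < superimposed[0]:
            non_superimposed.append((r0, superimposed[0]))
        if superimposed[1] < r1:
            non_superimposed.append((superimposed[1], r1))
    return superimposed, non_superimposed

# Medium eye position 100M from FIG. 3:
print(split_recommended_area((20, 100), (0, 100)))  # -> ((20, 100), [(0, 20)])
```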
FIG. 4A is a diagram showing a state in which a driver with a medium eye position in the height direction visually recognizes the vehicle foreground. FIG. 4B is a diagram showing a state in which a driver with a high eye position in the height direction visually recognizes the vehicle foreground. FIG. 4C is a diagram showing a state in which a driver with a low eye position in the height direction visually recognizes the vehicle foreground. FIGS. 4A, 4B, and 4C are reference examples for helping understanding of the superimposed display areas 403H, 403M, and 403L of Embodiment 1 (see FIG. 6A and later); the superimposed display area 403 in FIGS. 4A, 4B, and 4C is the same regardless of the driver's eye positions 100H, 100M, and 100L.
FIG. 5A is a reference example for helping understanding of the HUD system 4 according to Embodiment 1, explaining the difference in the display visual recognition area 401 according to the position of the driver's eyes in the height direction when the virtual image 200a is larger in the height direction than in FIG. 3. FIG. 5B is a reference example for helping understanding of the HUD system 4 according to Embodiment 1, showing a state in which a driver with a high eye position in the height direction visually recognizes the vehicle foreground when the virtual image 200a is larger in the height direction than in FIG. 3. FIG. 5C is a reference example for helping understanding of the HUD system 4 according to Embodiment 1, showing a state in which a driver with a low eye position in the height direction visually recognizes the vehicle foreground when the virtual image 200a is larger in the height direction than in FIG. 3.
As described above, the display visual recognition areas 401H, 401M, and 401L differ according to the driver's eye positions 100H, 100M, and 100L, and the superimposed display area 403 differs according to the display visual recognition areas 401H, 401M, and 401L. In the reference example, however, the difference in the superimposed display area 403 according to the eye positions 100H, 100M, and 100L is not taken into account. Therefore, for example, as shown in FIG. 4A, a driver at the medium eye position 100M can visually recognize the entire superimposed display area 403 within the display visual recognition area 401M. On the other hand, as shown in FIG. 4B, when the driver's eye position 100H is high, the superimposed display area 403 does not fit within the display visual recognition area 401H, and there is a superimposed-display-impossible area 410 in which the display object 201 of the virtual image 200 cannot be displayed superimposed on a real object in the vehicle foreground. Similarly, as shown in FIG. 4C, at the low eye position 100L the superimposed display area 403 does not fit within the display visual recognition area 401L, and a superimposed-display-impossible area 410 exists.
In order to maintain the superimposed display area 403 at a depth of 20 m to 100 m even at the high eye position 100H and the low eye position 100L, the virtual image 200a must be made larger in the height direction than the virtual image 200 in FIG. 3, as in FIG. 5A of the reference example. By increasing the size of the virtual image 200a in the height direction from 0.28 m to 0.38 m, the display visual recognition area 401H expands from a depth of 18 m to 56 m to a depth of 18 m to 100 m, and the display visual recognition area 401L expands from a depth of 23 m to 670 m to a depth of 20 m to 670 m. That is, as shown in FIG. 5B, a display visual recognition area 202 for a driver with a high eye position 100H is added to the upper end of the virtual image 200. Also, as shown in FIG. 5C, a display visual recognition area 203 for a driver with a low eye position 100L is added to the lower end of the virtual image 200.
However, in FIG. 5B, the display visual recognition area 203 for a driver with a low eye position 100L, added at the lower end of the virtual image 200, is unnecessary for a driver with a high eye position 100H. Similarly, in FIG. 5C, the display visual recognition area 202 for a driver with a high eye position 100H, added at the upper end of the virtual image 200, is unnecessary for a driver with a low eye position 100L.
In the above reference example, when the eye position changes by only ±0.06 m in the height direction, the size of the virtual images 200 and 200a in the height direction must grow from 0.28 m to 0.38 m, a factor of more than 1.3. In order to project this large virtual image 200a onto the windshield 300, the display unit 31 and the reflection mirror 32 of the HUD device 3 must be enlarged. However, since the vehicle-side space such as the dashboard in which the HUD device 3 is installed is limited, increasing the size of the HUD device 3 is not preferable.
Therefore, in Embodiment 1, the area changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100, as described later, so that drivers with different eye positions can comfortably visually recognize the virtual image without increasing the size of the HUD device 3.
FIG. 6A is a diagram showing a change example of the superimposed display area 403 according to the position of the driver's eyes in Embodiment 1, for a driver with a high eye position in the height direction. The recommended display area 402 is assumed to be a depth of 0 m to 100 m regardless of the eye position in the height direction. The area changing unit 22 calculates the display visual recognition area 401H for the driver at the high eye position 100H as a depth of 18 m to 56 m. The area changing unit 22 also calculates the superimposed display area 403H for the driver at the high eye position 100H as a depth of 18 m to 56 m, and the non-superimposed display area 404H as depths of 0 m to 18 m and 56 m to 100 m.
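With the interval-splitting sketch given earlier, the 100H case reproduces these numbers:

```python
print(split_recommended_area((18, 56), (0, 100)))
# -> ((18, 56), [(0, 18), (56, 100)]), i.e. 403H and 404H
```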
FIG. 6B is a diagram showing a state in which a driver with a medium eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1. When the image generation unit 23, described later, causes the HUD device 3 to display, as a guidance display for a driver at the medium eye position 100M, image information guiding the vehicle 1 to turn left at an intersection 75 m ahead, it superimposes the display object 201 of the virtual image 200 on that intersection, a real object 75 m ahead.
FIG. 6C is a diagram showing a state in which a driver with a high eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1. When the image generation unit 23, described later, causes the HUD device 3 to display, as a guidance display for a driver at the high eye position 100H, image information guiding the vehicle 1 to turn left at an intersection 75 m ahead, the intersection, a real object, is in the non-superimposed display area 404H; the image generation unit 23 therefore does not superimpose the display object 201 on the intersection but superimposes it on the foreground of the vehicle 1. The display position of the display object 201 in that case is assumed to be predetermined. In the example of FIG. 6C, the display object 201 is displayed in the lower part of the virtual image 200. Then, when the distance from the vehicle 1 to the intersection becomes 56 m or less and the intersection enters the superimposed display area 403H, the image generation unit 23 superimposes the display object 201 of the virtual image 200 on the intersection, as shown in FIG. 6B.
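The switch between the two displays in FIGS. 6B and 6C reduces to testing whether the real object's depth lies inside the current superimposed display area. A sketch of that decision, with assumed function and label names:

```python
def choose_display_mode(object_depth_m: float,
                        superimposed_area: tuple[float, float]) -> str:
    """Superimpose on the real object when it is inside the superimposed
    display area; otherwise fall back to a predetermined fixed position
    (e.g. the lower part of the virtual image)."""
    near, far = superimposed_area
    if near <= object_depth_m <= far:
        return "superimpose_on_object"
    return "fixed_fallback_position"

# High eye position 100H (superimposed display area 403H: 18 m to 56 m):
print(choose_display_mode(75.0, (18.0, 56.0)))  # -> fixed_fallback_position
print(choose_display_mode(50.0, (18.0, 56.0)))  # -> superimpose_on_object
```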
FIG. 7A is a diagram showing a change example of the superimposed display area 403 according to the position of the driver's eyes in Embodiment 1, for a driver with a low eye position in the height direction. The recommended display area 402 is assumed to be a depth of 0 m to 100 m regardless of the eye position in the height direction. The area changing unit 22 calculates the display visual recognition area 401L for the driver at the low eye position 100L as a depth of 23 m to 670 m. The area changing unit 22 also calculates the superimposed display area 403L for the driver at the low eye position 100L as a depth of 23 m to 100 m, and the non-superimposed display area 404L as a depth of 0 m to 23 m.
FIG. 7B is a diagram showing a state in which a driver with a medium eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1. When the image generation unit 23, described later, causes the HUD device 3 to display, as a guidance display for a driver at the medium eye position 100M, image information guiding the vehicle 1 to turn left at an intersection 20 m ahead, it superimposes the display object 201 of the virtual image 200 on that intersection, a real object 20 m ahead.
FIG. 7C is a diagram showing a state in which a driver with a low eye position in the height direction visually recognizes the vehicle foreground in Embodiment 1. When the image generation unit 23, described later, causes the HUD device 3 to display, as a guidance display for a driver at the low eye position 100L, image information guiding the vehicle 1 to turn left at an intersection 20 m ahead, the intersection, a real object, is in the non-superimposed display area 404L; the image generation unit 23 therefore does not superimpose the display object 201 on the intersection but superimposes it on the foreground of the vehicle 1. The display position of the display object 201 in that case is assumed to be predetermined.
In the above example, the recommended display area 402 corresponding to the display object 201 guiding a right or left turn at an intersection was set to a depth of 0 m to 100 m, but the recommended display area 402 is not limited to this depth distance and may differ depending on the display object 201. For example, the recommended display area 402 corresponding to a display object 201 that emphasizes a white line on the road surface is assumed to be a depth of 30 m to 80 m. Since the superimposed display area 403 and the non-superimposed display area 404 are determined from the recommended display area 402, when the recommended display area 402 changes according to the display object 201, the superimposed display area 403 and the non-superimposed display area 404 change accordingly.
Information defining the correspondence between the display object 201 and at least the recommended display area 402 is stored in the database 24 of the HUD control device 2. The database 24 may store not only the correspondence between the display object 201 and the recommended display area 402 but also information defining the correspondence among the display object 201, the display visual recognition area 401, the superimposed display area 403, and the non-superimposed display area 404. How the information stored in the database 24 is used will be described later. The database 24 does not need to be built into the HUD control device 2 and may, for example, be constructed on a server device outside the vehicle (not shown) that can communicate via the wireless communication device 56. The database 24 also stores information about the HUD device 3, such as the position, size, and distortion amount of the virtual image 200.
The image generation unit 23 acquires captured image information from the in-vehicle camera 51 and the outside camera 52 of the in-vehicle device 5 described later. The image generation unit 23 also acquires, for example, position information of the vehicle 1 from a GPS (Global Positioning System) receiver 53, detection information of objects existing around the vehicle 1 from the radar sensor 54, various vehicle information such as the traveling speed of the vehicle 1 from an ECU (Electronic Control Unit) 55, various information from the wireless communication device 56, and navigation information and information indicating the shape of the road from the navigation device 57.
The image generation unit 23 uses the various information acquired from the in-vehicle device 5 to determine, from among the many display objects 201 stored in the database 24, the display object 201 to be displayed on the HUD device 3. The display object 201 is a figure, character, or the like representing the traveling speed of the vehicle 1, the lane in which the vehicle 1 is traveling, the travel route of the vehicle 1, the positions of other vehicles or obstacles around the vehicle 1, the traveling direction of the vehicle 1, and so on. The image generation unit 23 then determines the display mode of the display object 201 and generates image information containing the display object 201 in the determined display mode. The image generation unit 23 outputs the generated image information to the display unit 31 of the HUD device 3. This image information is projected onto the windshield 300 by the HUD device 3 and visually recognized by the driver as the display object 201 of the virtual image 200.
The display mode of the display object 201 includes the shape, position, size, and color of the display object 201 in the virtual image 200, and whether the display object 201 is to be displayed superimposed on a real object or not. For example, the image generation unit 23 detects, using the various information acquired from the in-vehicle device 5, the position of the real object on which the display object 201 is to be superimposed, and determines whether the real object exists in the superimposed display area 403. When the image generation unit 23 determines that a real object exists in the superimposed display area 403, it determines the display mode so that the display object 201 is visually recognized superimposed on the real object. For example, the image generation unit 23 may generate binocular parallax image information in which the display object 201 is shifted in the left-right direction, or may deform the image information so that the display object 201 is reduced toward the vanishing point in the foreground of the vehicle 1. Furthermore, the image generation unit 23 may change the size, color, or the like of the display object 201 according to the real object, or add a shadow to the display object 201, so that the display object 201 is visually recognized superimposed on a real object such as an intersection. On the other hand, when the real object does not exist in the superimposed display area 403, that is, when the real object exists in the non-superimposed display area 404, the image generation unit 23 determines the display mode, such as the shape, position, and size of the display object 201, so that the display object 201 is displayed in the lower part of the virtual image 200.
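Where the text mentions reducing the display object 201 toward the vanishing point, one conventional model is pinhole-style scaling, in which the drawn size falls off as 1/depth. A sketch under that assumption (the reference depth of 20 m is arbitrary; the patent does not specify the scaling rule):

```python
def apparent_scale(object_depth_m: float, reference_depth_m: float = 20.0) -> float:
    """Pinhole-camera scale factor: an object rendered for depth d is drawn
    at reference_depth / d times its reference size, so it shrinks toward
    the vanishing point as d grows."""
    return reference_depth_m / object_depth_m

print(apparent_scale(20.0))   # 1.0 at the reference depth
print(apparent_scale(100.0))  # 0.2 far away, near the vanishing point
```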
The in-vehicle device 5 includes the in-vehicle camera 51. The in-vehicle device 5 also includes at least one of the outside camera 52, the GPS receiver 53, the radar sensor 54, the ECU 55, the wireless communication device 56, and the navigation device 57.
The in-vehicle camera 51 is a camera that images the occupants of the vehicle 1, in particular the driver. The in-vehicle camera 51 outputs the captured image information to the eye position detection unit 21.
The outside camera 52 is a camera that images the surroundings of the vehicle 1. For example, the outside camera 52 images the lane in which the vehicle 1 is traveling, other vehicles around the vehicle 1, obstacles, and the like. The outside camera 52 outputs the captured image information to the HUD control device 2.
The GPS receiver 53 receives GPS signals from GPS satellites (not shown) and outputs position information corresponding to the coordinates indicated by the GPS signals to the HUD control device 2.
The radar sensor 54 detects the direction and shape of objects around the vehicle 1, and further detects the distance between the vehicle 1 and those objects. The radar sensor 54 is, for example, a millimeter-wave radio sensor, an ultrasonic sensor, or an optical radar sensor. The radar sensor 54 outputs detection information to the HUD control device 2.
The ECU 55 is a control unit that controls various operations of the vehicle 1. The ECU 55 communicates with the HUD control device 2 by a communication method based on the CAN (Controller Area Network) standard and outputs vehicle information indicating the states of various operations of the vehicle 1 to the HUD control device 2. The vehicle information includes the traveling speed, the steering angle, and the like of the vehicle 1.
The wireless communication device 56 is a communication device that connects to a network outside the vehicle and acquires various information by wireless communication. The wireless communication device 56 is, for example, a receiver mounted on the vehicle 1 or a mobile communication terminal such as a smartphone brought into the vehicle 1. The network outside the vehicle is, for example, the Internet. The various information includes weather information around the vehicle 1, facility information, and the like. The wireless communication device 56 may also acquire, through the network outside the vehicle, information such as the recommended display area 402 corresponding to the image information from a server device outside the vehicle. The wireless communication device 56 outputs the various information to the HUD control device 2.
The navigation device 57 searches for and guides along the travel route of the vehicle 1 based on destination information set by an occupant of the vehicle 1, map information stored in a storage device (not shown), and the position information acquired from the GPS receiver 53. The storage device storing the map information may be constructed on the vehicle 1 or on a server device outside the vehicle that can communicate via the wireless communication device 56. The navigation device 57 outputs, to the HUD control device 2 as navigation information, the traveling direction of the vehicle 1 at guidance points such as intersections on the travel route, the expected arrival times at waypoints or the destination, congestion information on the travel route of the vehicle 1 and surrounding roads, and the like.
The navigation device 57 may be an information device mounted on the vehicle 1, or a mobile communication terminal such as a PND (Portable Navigation Device) or a smartphone brought into the vehicle 1.
Next, the operation of the HUD control device 2 will be described.

FIG. 8 is a flowchart showing an operation example of the HUD control device 2 according to Embodiment 1. The HUD control device 2 repeatedly executes the processing shown in the flowchart of FIG. 8 while the engine of the vehicle 1 is on or while the HUD system 4 is on.
In step ST11, the eye position detection unit 21 detects the position of the driver's eyes using the captured image information captured by the in-vehicle camera 51. The eye position detection unit 21 also determines whether the detected position of the driver's eyes is at the high (H), medium (M), or low (L) height position.
 FIG. 9 is a diagram explaining an example of eye position determination by the eye position detection unit 21 in the first embodiment. For example, the eye position detection unit 21 determines the driver's eye position 100H, 100M, or 100L in the height direction using a first threshold TH1 (e.g., 1.45 m) and a second threshold TH2 (e.g., 1.34 m) that are set in advance based on the eyellipse. The eyellipse is the name of the ellipses that statistically represent the distribution of drivers' eye positions; of the three ellipses, a larger ellipse statistically contains more drivers' eye positions. When the driver's eye position in the height direction detected from the captured image information is equal to or greater than the first threshold TH1, the eye position detection unit 21 determines the high eye position 100H; when it is less than the first threshold TH1 and equal to or greater than the second threshold TH2, the medium eye position 100M; and when it is less than the second threshold TH2, the low eye position 100L. Although the height from the ground to the driver's eye 100 is divided into three levels here, it may be divided into any number of levels.
 The third threshold TH3 and the fourth threshold TH4 in FIG. 9 will be described later.
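 As a rough illustration of this thresholding step, the following minimal sketch classifies a detected eye height into the three levels. The function name and the returned labels are illustrative, not from the patent; only the example threshold values come from the text.

```python
# Example thresholds from the text, set in advance based on the eyellipse.
TH1 = 1.45  # m: boundary between the high and medium eye positions
TH2 = 1.34  # m: boundary between the medium and low eye positions

def classify_eye_height(eye_height_m: float) -> str:
    """Map a detected eye height (metres above the ground) to the
    three-level classification used in step ST11."""
    if eye_height_m >= TH1:
        return "HIGH"    # eye position 100H
    if eye_height_m >= TH2:
        return "MEDIUM"  # eye position 100M
    return "LOW"         # eye position 100L

print(classify_eye_height(1.40))  # -> "MEDIUM"
```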
 In step ST12, the image generation unit 23 determines a display object 201 of the image information to be displayed by the HUD device 3, based on the various information acquired from the in-vehicle devices 5. The display object 201 is, for example, content such as an arrow guiding the travel route, content highlighting a white line, or content indicating that a surrounding vehicle has been detected; however, the display object 201 is not limited to these.
 In step ST13, the region changing unit 22 acquires information on the position and size of the virtual image 200 from the database 24, which stores information about the HUD device 3 such as the position, size, and distortion amount of the virtual image 200, as well as information on the recommended display area 402 corresponding to each display object 201. The region changing unit 22 calculates the display viewing area 401 using the driver's eye position in the height direction determined by the eye position detection unit 21 in step ST11 and the position and size of the virtual image 200 acquired from the database 24.
 Alternatively, a display viewing area 401 calculated in advance for each combination of the position and size of the virtual image 200 and the eye position in the height direction may be stored in the database 24. In that case, the region changing unit 22 acquires the display viewing area 401 from the database 24 instead of calculating it.
 The virtual image 200 projected onto the windshield 300 may be distorted due to distortion of the reflection mirror 32, the windshield 300, and the like. In that case, the region changing unit 22 may suppress the distortion of the display object 201 in the virtual image 200 projected onto the windshield 300 by correcting the distortion of the display object 201 in the image information, using information stored in the database 24 that indicates the correspondence between the eye position in the height direction and the distortion amount of the virtual image 200.
 In step ST14, the region changing unit 22 specifies the recommended display area 402 corresponding to the display object 201 determined by the image generation unit 23, using information stored in the database 24 that indicates the correspondence between display objects 201 and recommended display areas 402.
 In step ST15, the region changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 corresponding to the driver's eye position determined by the eye position detection unit 21 in step ST11, using the display viewing area 401 calculated in step ST13 and the recommended display area 402 specified in step ST14.
 Alternatively, information defining the correspondence among the display object 201, the eye position in the height direction, the superimposed display area 403, and the non-superimposed display area 404 may be stored in the database 24. In that case, the region changing unit 22 acquires the superimposed display area 403 and the non-superimposed display area 404 from the database 24 instead of calculating them.
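 Treating the areas as depth intervals on the road surface, step ST15 reduces to an interval intersection: the superimposed display area is the overlap of the display viewing area with the recommended display area, and the non-superimposed display area is the near-side remainder of the recommended display area. A minimal sketch under that interval assumption follows; the function name is illustrative, and only the near-side remainder is returned, as in the patent's figures.

```python
def split_regions(viewing, recommended):
    """Given (near, far) depth intervals in metres, return the superimposed
    display area (their intersection) and the non-superimposed display area
    (the part of the recommended area in front of the viewing area)."""
    near = max(viewing[0], recommended[0])
    far = min(viewing[1], recommended[1])
    superimposed = (near, far) if near < far else None
    non_superimposed = (recommended[0], near) if near > recommended[0] else None
    return superimposed, non_superimposed

# Depth values matching the example of FIG. 10, discussed in the second embodiment:
print(split_regions((20.0, 100.0), (0.0, 100.0)))  # ((20.0, 100.0), (0.0, 20.0))
print(split_regions((22.0, 110.0), (0.0, 100.0)))  # ((22.0, 100.0), (0.0, 22.0))
```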
 In step ST16, the image generation unit 23 determines the display mode of the display object 201 determined in step ST12, using the superimposed display area 403 or the non-superimposed display area 404 calculated by the region changing unit 22 in step ST15. The image generation unit 23 then generates image information containing the display object 201 in the determined display mode and outputs it to the display unit 31 of the HUD device 3.
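 Consolidating the flowchart, one pass can be sketched as below, reusing classify_eye_height and split_regions from the sketches above. The lookup tables stand in for database 24 with invented depth values, the object decision of step ST12 is reduced to a string argument, and the rendering of step ST16 is left abstract; this is an illustrative outline, not the patent's implementation.

```python
# Stand-ins for database 24; the depth intervals are illustrative only.
VIEWING_AREA_BY_EYE = {             # step ST13: display viewing area per eye class
    "HIGH":   (18.0, 90.0),
    "MEDIUM": (20.0, 100.0),
    "LOW":    (23.0, 115.0),
}
RECOMMENDED_AREA_BY_OBJECT = {      # step ST14: recommended area per display object
    "route_arrow":    (0.0, 100.0),
    "lane_highlight": (10.0, 60.0),
}

def hud_frame(eye_height_m, display_object):
    eye_class = classify_eye_height(eye_height_m)             # ST11
    viewing = VIEWING_AREA_BY_EYE[eye_class]                  # ST13
    recommended = RECOMMENDED_AREA_BY_OBJECT[display_object]  # ST14
    return split_regions(viewing, recommended)                # ST15

superimposed, non_superimposed = hud_frame(1.40, "route_arrow")
print(superimposed, non_superimposed)  # (20.0, 100.0) (0.0, 20.0)
```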
 As described above, the HUD control device 2 according to the first embodiment includes the eye position detection unit 21, the image generation unit 23, and the region changing unit 22. The eye position detection unit 21 detects the position of the driver's eye 100 in the height direction. The image generation unit 23 generates the image information to be displayed on the display unit 31 of the HUD device 3. The region changing unit 22 changes, according to the position of the driver's eye 100 in the height direction detected by the eye position detection unit 21, the superimposed display area 403 in which the image information generated by the image generation unit 23 is superimposed and displayed as the virtual image 200 on a real object in the foreground of the vehicle 1. With this configuration, the driver can visually recognize the virtual image 200 superimposed on a real object regardless of the height of the driver's eye 100, without increasing the size of the HUD device 3.
 Furthermore, the region changing unit 22 of the first embodiment changes the superimposed display area 403 according to both the position of the driver's eye 100 in the height direction and the image information generated by the image generation unit 23. By changing the superimposed display area 403 for each display object 201 included in the image information, the region changing unit 22 can change the superimposed display area 403 more accurately.
Embodiment 2.
 Since the configuration of the HUD system 4 according to the second embodiment is identical in the drawings to the configuration shown in FIG. 1 of the first embodiment, FIG. 1 is referred to below.
 The eye position detection unit 21 of the second embodiment detects the position of the driver's eye 100 in the depth direction in addition to the position in the height direction.
 For example, the eye position detection unit 21 determines the driver's eye position 100F, 100C, or 100B in the depth direction using a third threshold TH3 and a fourth threshold TH4 that are set in advance based on the eyellipse shown in FIG. 9. In this example, the thresholds TH3 and TH4 divide the driver's eye position in the depth direction into three levels: the front eye position 100F, the center eye position 100C, and the rear eye position 100B; however, it may be divided into any number of levels.
 In the following, an example is shown in which the position of the driver's eyes in the depth direction is divided into two levels, front and rear.
 FIG. 10 is a diagram showing an example of changing the superimposed display area 403 according to the position of the driver's eyes in the second embodiment, for a driver whose eye position in the depth direction is to the rear. The recommended display area 402 is assumed to span a depth of 0 m to 100 m regardless of the eye position in the depth direction. In FIG. 9, both the front eye position 100F and the rear eye position 100B are located 1.4 m above the ground; the depth distance from the front eye position 100F to the virtual image 200 is 5 m, and the depth distance from the rear eye position 100B to the virtual image 200 is 5.5 m. Even when the driver's eyes are at the same height, the display viewing area 401F for a driver at the front eye position 100F differs from the display viewing area 401B for a driver at the rear eye position 100B. In the example of FIG. 10, the display viewing area 401F spans a depth of 20 m to 100 m, and the display viewing area 401B spans a depth of 22 m to 110 m. The display viewing areas 401F and 401B are calculated based on trigonometric functions.
 Since the display viewing area 401 thus differs depending on the driver's eye position in the depth direction, the superimposed display area 403 and the non-superimposed display area 404 also differ accordingly. In the example of FIG. 10, the superimposed display area 403F for a driver at the front eye position 100F spans a depth of 20 m to 100 m, and the non-superimposed display area 404F spans a depth of 0 m to 20 m. The superimposed display area 403B for a driver at the rear eye position 100B spans a depth of 22 m to 100 m, and the non-superimposed display area 404B spans a depth of 0 m to 22 m.
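 The trigonometric calculation mentioned above can be reproduced with similar triangles: a sight line from the eye through an edge of the virtual image is extended until it meets the road surface. In the sketch below, the virtual-image edge heights (1.05 m and 1.33 m) are back-derived assumptions chosen so that the depth figures in the text come out; they are not values stated in the patent.

```python
def ground_range(eye_height_m, eye_to_image_m, image_bottom_m, image_top_m):
    """Depth interval on the road visible through the virtual image.
    A sight line from the eye (at eye_height_m) through an image edge at
    height h reaches the ground at eye_to_image_m * eye_height_m /
    (eye_height_m - h) -- the similar-triangles / look-down-angle relation."""
    near = eye_to_image_m * eye_height_m / (eye_height_m - image_bottom_m)
    far = eye_to_image_m * eye_height_m / (eye_height_m - image_top_m)
    return near, far

print(ground_range(1.4, 5.0, 1.05, 1.33))  # front eye 100F: approx. (20.0, 100.0)
print(ground_range(1.4, 5.5, 1.05, 1.33))  # rear eye 100B: approx. (22.0, 110.0)
```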
 As described above, the eye position detection unit 21 of the second embodiment detects the position of the driver's eye 100 in the depth direction, and the region changing unit 22 changes the superimposed display area 403 according to that position. Because the look-down angle at which the driver views the virtual image 200 varies with the eye position in the depth direction, the region changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100 in the depth direction in addition to the height direction. This allows the region changing unit 22 to change the superimposed display area 403 more accurately.
 Furthermore, the region changing unit 22 of the second embodiment changes the superimposed display area 403 according to both the position of the driver's eye 100 in the depth direction and the image information generated by the image generation unit 23. By changing the superimposed display area 403 for each display object 201 included in the image information, the region changing unit 22 can change the superimposed display area 403 more accurately.
Embodiment 3.
 FIG. 11 is a block diagram showing the main part of the HUD system 4 according to the third embodiment. FIG. 12 is a configuration diagram of the HUD system 4 according to the third embodiment when mounted on a vehicle. The HUD system 4 according to the third embodiment has a configuration in which an angle information acquisition unit 25 and a reflection mirror adjustment unit 33 are added to the HUD system 4 of the first embodiment shown in FIG. 1. In FIGS. 11 and 12, parts that are the same as or correspond to those in FIGS. 1 and 2 are given the same reference signs, and their description is omitted.
 The HUD device 3 includes a reflection mirror adjustment unit 33, such as an actuator, that adjusts the tilt angle of the reflection mirror 32. The reflection mirror adjustment unit 33 adjusts the tilt angle of the reflection mirror 32 in accordance with an instruction from the driver or the like, and outputs angle information including the adjusted tilt angle of the reflection mirror 32.
 The HUD control device 2 includes an angle information acquisition unit 25. The angle information acquisition unit 25 acquires the angle information including the tilt angle of the reflection mirror 32 from the reflection mirror adjustment unit 33 and outputs it to the region changing unit 22.
 FIG. 13 is a diagram showing the correspondence between the tilt angle of the reflection mirror 32 and the position of the virtual image 200 in the third embodiment. By changing the angle at which the light beam emitted from the display unit 31 is reflected by the reflection mirror 32, the reflection mirror adjustment unit 33 changes the position of the virtual image 200 in the height direction and the depth direction. This changes the display viewing area 401 and allows the reflection mirror 32 to be made smaller. Because the display viewing area 401 changes when the position of the virtual image 200 changes in the height and depth directions, the region changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 in consideration of the position of the virtual image 200 in the height and depth directions corresponding to the tilt angle of the reflection mirror 32.
 As in the first embodiment, the database 24 stores information about the HUD device 3 such as the position, size, and distortion amount of the virtual image 200, as well as information on the recommended display area 402 corresponding to each display object 201. Furthermore, in the third embodiment, information indicating the correspondence between the tilt angle of the reflection mirror 32 and the position, size, distortion amount, and the like of the virtual image 200 is stored as the position of the virtual image 200.
 FIG. 14 is a flowchart showing an operation example of the HUD control device 2 according to the third embodiment. Steps ST11, ST12, ST14, and ST16 in FIG. 14 are the same operations as steps ST11, ST12, ST14, and ST16 in FIG. 8.
 In step ST31, the angle information acquisition unit 25 acquires the angle information including the tilt angle of the reflection mirror 32 from the reflection mirror adjustment unit 33. The region changing unit 22 acquires, from the database 24, the information on the position and size of the virtual image 200 corresponding to the tilt angle of the reflection mirror 32 acquired by the angle information acquisition unit 25, and specifies the position and size of the virtual image 200.
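 Since the tilt angle selects among a finite set of pre-measured optical configurations, the lookup in step ST31 can be as simple as a nearest-entry search over a calibration table. In the sketch below, the angles and virtual-image poses are invented placeholders; real values would come from optical calibration of the kind stored in database 24.

```python
# Hypothetical calibration table: mirror tilt (degrees) -> virtual-image pose.
TILT_CALIBRATION = [
    (20.0, {"depth_m": 4.8, "bottom_m": 1.02, "top_m": 1.30}),
    (21.0, {"depth_m": 5.0, "bottom_m": 1.05, "top_m": 1.33}),
    (22.0, {"depth_m": 5.2, "bottom_m": 1.08, "top_m": 1.36}),
]

def virtual_image_for_tilt(tilt_deg):
    """Return the pre-measured virtual-image pose whose calibration angle
    is closest to the reported mirror tilt."""
    return min(TILT_CALIBRATION, key=lambda row: abs(row[0] - tilt_deg))[1]

print(virtual_image_for_tilt(20.9))  # -> pose measured at 21.0 degrees
```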
 In step ST13a, the region changing unit 22 calculates the display viewing area 401 using the driver's eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11, and the position and size of the virtual image 200 specified in step ST31.
 Alternatively, a display viewing area 401 calculated in advance for each combination of the position and size of the virtual image 200 and the eye position in at least one of the height direction and the depth direction may be stored in the database 24. In that case, the region changing unit 22 acquires the display viewing area 401 from the database 24 instead of calculating it.
 In step ST15a, the region changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 corresponding to the eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11, using the display viewing area 401 calculated in step ST13a and the recommended display area 402 specified in step ST14.
 Alternatively, information defining the correspondence among the display object 201, the eye position in at least one of the height direction and the depth direction, the superimposed display area 403, and the non-superimposed display area 404 may be stored in the database 24. In that case, the region changing unit 22 acquires the superimposed display area 403 and the non-superimposed display area 404 from the database 24 instead of calculating them.
 As described above, the HUD device 3 according to the third embodiment includes the reflection mirror adjustment unit 33 that adjusts the tilt angle of the reflection mirror 32, and the HUD control device 2 includes the angle information acquisition unit 25 that acquires the angle information including the tilt angle of the reflection mirror 32 from the reflection mirror adjustment unit 33. The region changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100 and the tilt angle of the reflection mirror 32 acquired by the angle information acquisition unit 25. This allows the region changing unit 22 to change the superimposed display area 403 in response to changes in the display viewing area 401 caused by changes in the tilt angle of the reflection mirror 32, and thus to change the superimposed display area 403 more accurately.
Embodiment 4.
 FIG. 15 is a block diagram showing the main part of the HUD system 4 according to the fourth embodiment. FIG. 16 is a configuration diagram of the HUD system 4 according to the fourth embodiment when mounted on a vehicle. The HUD system 4 according to the fourth embodiment has a configuration in which a position information acquisition unit 26 and a HUD device position adjustment unit 34 are added to the HUD system 4 of the first embodiment shown in FIG. 1. In FIGS. 15 and 16, parts that are the same as or correspond to those in FIGS. 1 and 2 are given the same reference signs, and their description is omitted.
 The HUD device 3 includes a HUD device position adjustment unit 34, such as an actuator, that adjusts the tilt angle or depth position of the HUD device 3 or of a part of the HUD device 3. In accordance with an instruction from the driver or the like, the HUD device position adjustment unit 34 adjusts at least one of the tilt angle or depth position of the display unit 31, the tilt angle or depth position of the reflection mirror 32, or the tilt angle or depth position of the housing of the HUD device 3 containing the display unit 31 and the reflection mirror 32. The HUD device position adjustment unit 34 outputs position information including the adjusted tilt angle or depth position of the HUD device 3 or of the part of the HUD device 3.
 The HUD control device 2 includes a position information acquisition unit 26. The position information acquisition unit 26 acquires the position information including the tilt angle or depth position of the HUD device 3 or of a part of the HUD device 3 from the HUD device position adjustment unit 34 and outputs it to the region changing unit 22.
 FIG. 17 is a diagram showing the correspondence between the depth position of the HUD device 3 and the position of the virtual image 200 in the fourth embodiment. By changing the depth position of the housing 35 of the HUD device 3, the HUD device position adjustment unit 34 changes the position of the virtual image 200 in the height direction and the depth direction. Since this changes the display viewing area 401, the region changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 in consideration of the depth position of the housing 35 of the HUD device 3.
 The display viewing area 401 changes not only when the depth position of the housing 35 of the HUD device 3 changes, but also when the tilt angle of the housing 35, the tilt angle or depth position of the display unit 31, or the tilt angle or depth position of the reflection mirror 32 changes. The region changing unit 22 therefore calculates the superimposed display area 403 and the non-superimposed display area 404 in consideration of these depth positions and tilt angles.
 As in the first embodiment, the database 24 stores information about the HUD device 3 such as the position, size, and distortion amount of the virtual image 200, as well as information on the recommended display area 402 corresponding to each display object 201. Furthermore, in the fourth embodiment, information indicating the correspondence between at least one of the tilt angle or depth position of the display unit 31, the tilt angle or depth position of the reflection mirror 32, or the tilt angle or depth position of the entire HUD device 3 and the position, size, distortion amount, and the like of the virtual image 200 is stored as the position of the virtual image 200.
 FIG. 18 is a flowchart showing an operation example of the HUD control device 2 according to the fourth embodiment. Steps ST11, ST12, ST14, and ST16 in FIG. 18 are the same operations as steps ST11, ST12, ST14, and ST16 in FIG. 8.
 In step ST41, the position information acquisition unit 26 acquires, from the HUD device position adjustment unit 34, the position information including at least one of the tilt angle or depth position of the display unit 31, the tilt angle or depth position of the reflection mirror 32, or the tilt angle or depth position of the entire HUD device 3. The region changing unit 22 acquires, from the database 24, the information on the position and size of the virtual image 200 corresponding to the acquired tilt angle or depth position, and specifies the position and size of the virtual image 200.
 In step ST13b, the region changing unit 22 calculates the display viewing area 401 using the driver's eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11, and the position and size of the virtual image 200 specified in step ST41.
 Alternatively, a display viewing area 401 calculated in advance for each combination of the position and size of the virtual image 200 and the eye position in at least one of the height direction and the depth direction may be stored in the database 24. In that case, the region changing unit 22 acquires the display viewing area 401 from the database 24 instead of calculating it.
 In step ST15b, the region changing unit 22 calculates the superimposed display area 403 and the non-superimposed display area 404 corresponding to the eye position in at least one of the height direction and the depth direction determined by the eye position detection unit 21 in step ST11, using the display viewing area 401 calculated in step ST13b and the recommended display area 402 specified in step ST14.
 Alternatively, information defining the correspondence among the display object 201, the eye position in at least one of the height direction and the depth direction, the superimposed display area 403, and the non-superimposed display area 404 may be stored in the database 24. In that case, the region changing unit 22 acquires the superimposed display area 403 and the non-superimposed display area 404 from the database 24 instead of calculating them.
 As described above, the HUD device 3 according to the fourth embodiment includes the HUD device position adjustment unit 34 that adjusts at least one of the depth position or tilt angle of the entire HUD device 3 or of a part of the HUD device 3, and the HUD control device 2 includes the position information acquisition unit 26 that acquires the position information including at least one of that depth position or tilt angle from the HUD device position adjustment unit 34. The region changing unit 22 changes the superimposed display area 403 according to the position of the driver's eye 100 and at least one of the depth position or tilt angle of the entire HUD device 3 or of the part of the HUD device 3 acquired by the position information acquisition unit 26. This allows the region changing unit 22 to change the superimposed display area 403 in response to changes in the display viewing area 401 caused by changes in the position or angle of the HUD device 3 or of a part of the HUD device 3, and thus to change the superimposed display area 403 more accurately.
 Although the HUD control device 2 according to each embodiment is configured to control the HUD device 3, it may instead be configured to control an HMD (Head-Mounted Display) device. That is, the control target of the HUD control device 2 may be any display device capable of displaying a stereoscopic image, such as a HUD or an HMD.
 Finally, the hardware configuration of the HUD control device 2 according to each embodiment will be described.
 FIGS. 19A and 19B are diagrams showing hardware configuration examples of the HUD control device 2 according to each embodiment. The database 24 in the HUD control device 2 is a memory 1001. The functions of the eye position detection unit 21, the region changing unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 in the HUD control device 2 are realized by a processing circuit. That is, the HUD control device 2 includes a processing circuit for realizing these functions. The processing circuit may be a processing circuit 1000 as dedicated hardware, or may be a processor 1002 that executes a program stored in the memory 1001.
 As shown in FIG. 19A, when the processing circuit is dedicated hardware, the processing circuit 1000 is, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or a combination thereof. The functions of the eye position detection unit 21, the region changing unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 may be realized by a plurality of processing circuits 1000, or the functions of these units may be realized collectively by a single processing circuit 1000.
 As shown in FIG. 19B, when the processing circuit is a processor 1002, the functions of the eye position detection unit 21, the region changing unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 are realized by software, firmware, or a combination of software and firmware. The software or firmware is written as a program and stored in the memory 1001. The processor 1002 realizes the function of each unit by reading and executing the program stored in the memory 1001. That is, the HUD control device 2 includes the memory 1001 for storing a program that, when executed by the processor 1002, results in the execution of the steps shown in the flowchart of FIG. 8 and the like. This program can also be said to cause a computer to execute the procedures or methods of the eye position detection unit 21, the region changing unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26.
 Here, the processor 1002 is a CPU (Central Processing Unit), a processing device, an arithmetic device, a microprocessor, or the like.
 The memory 1001 may be a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), or a flash memory; a magnetic disk such as a hard disk or a flexible disk; or an optical disc such as a CD (Compact Disc) or a DVD (Digital Versatile Disc).
 Some of the functions of the eye position detection unit 21, the region changing unit 22, the image generation unit 23, the angle information acquisition unit 25, and the position information acquisition unit 26 may be realized by dedicated hardware, and some by software or firmware. In this way, the processing circuit in the HUD control device 2 can realize the above-described functions by hardware, software, firmware, or a combination thereof.
 Within the scope of the present invention, the embodiments may be freely combined, any component of each embodiment may be modified, and any component of each embodiment may be omitted.
 In the above description, the HUD control device 2 has only the function of controlling the HUD device 3; however, it may additionally have the function of controlling one or more other display devices (for example, a center display). That is, the HUD control device 2 may be incorporated into a display control device that controls various display devices mounted on the vehicle, such as the HUD device 3 and a center display.
 In the above description, the HUD control device 2 causes the HUD device 3 to display the display object 201 as the virtual image 200; however, it may also cause information related to the display object 201 to be output as audio from a speaker. For example, when presenting the driver with information guiding the vehicle 1 to turn left at an intersection 75 m ahead, the HUD control device 2 causes the HUD device 3 to display, in the superimposed display area 403, an image guiding the left turn at that intersection as the display object 201 of the virtual image 200, and, for the non-superimposed display area 404, causes a speaker to output audio guiding the left turn at that intersection.
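 As a sketch of this channel selection, assuming the superimposed display area is a depth interval and using the 75 m example above; the function and callback names are placeholders, not from the patent.

```python
def present_guidance(distance_m, superimposed_area, draw_arrow, speak):
    """Render AR guidance when the guided point falls inside the superimposed
    display area; otherwise fall back to voice output from a speaker."""
    near, far = superimposed_area
    if near <= distance_m <= far:
        draw_arrow(distance_m)  # display object 201 shown as virtual image 200
    else:
        speak(f"In {distance_m:.0f} metres, turn left.")

present_guidance(75.0, (20.0, 100.0),
                 draw_arrow=lambda d: print(f"arrow at {d} m"),
                 speak=print)  # 75 m lies inside (20, 100): the arrow is drawn
```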
 Since the HUD control device according to the present invention accommodates differences in the position of the driver's eyes without increasing the size of the HUD device, it is suitable for use as a HUD control device that controls an AR-HUD device for a vehicle or the like.
 1 vehicle, 2 HUD control device, 3 HUD device, 4 HUD system, 5 in-vehicle device, 21 eye position detection unit, 22 region changing unit, 23 image generation unit, 24 database, 25 angle information acquisition unit, 26 position information acquisition unit, 31 display unit, 32 reflection mirror, 33 reflection mirror adjustment unit, 34 HUD device position adjustment unit (position adjustment unit), 35 housing, 51 in-vehicle camera, 52 vehicle-exterior camera, 53 GPS receiver, 54 radar sensor, 55 ECU, 56 wireless communication device, 57 navigation device, 100 driver's eye, 100B, 100C, 100F, 100H, 100L, 100M driver's eye positions, 200, 200a virtual image, 201 display object, 202, 203 display viewing areas, 300 windshield (projection surface), 401, 401B, 401F, 401H, 401L, 401M display viewing areas, 402 recommended display area, 403, 403B, 403F, 403H, 403L, 403M superimposed display areas, 404, 404B, 404F, 404H, 404L non-superimposed display areas, 410 non-superimposable display area, 1000 processing circuit, 1001 memory, 1002 processor.

Claims (11)

  1.  A head-up display control device for controlling a head-up display device that comprises a display unit for displaying image information and a reflection mirror for reflecting the image information displayed by the display unit and projecting it onto a projection surface, the head-up display device superimposing and displaying the image information as a virtual image on the foreground of a vehicle visually recognized by a driver, the head-up display control device comprising:
     an eye position detection unit to detect a position of the driver's eyes;
     an image generation unit to generate image information to be displayed on the display unit; and
     a region changing unit to change, according to the position of the driver's eyes detected by the eye position detection unit, a superimposed display area in which the image information generated by the image generation unit is superimposed and displayed as a virtual image on a real object in the foreground of the vehicle.
  2.  The head-up display control device according to claim 1, wherein the eye position detection unit detects a position of the driver's eyes in a height direction, and the region changing unit changes the superimposed display area according to the position of the driver's eyes in the height direction.
  3.  The head-up display control device according to claim 1, wherein the eye position detection unit detects a position of the driver's eyes in a depth direction, and the region changing unit changes the superimposed display area according to the position of the driver's eyes in the depth direction.
  4.  The head-up display control device according to any one of claims 1 to 3, wherein, in a case where the head-up display device includes a reflection mirror adjustment unit to adjust a tilt angle of the reflection mirror, the head-up display control device comprises an angle information acquisition unit to acquire angle information including the tilt angle of the reflection mirror from the reflection mirror adjustment unit, and the region changing unit changes the superimposed display area according to the position of the driver's eyes and the tilt angle of the reflection mirror acquired by the angle information acquisition unit.
  5.  The head-up display control device according to any one of claims 1 to 3, wherein, in a case where the head-up display device includes a position adjustment unit to adjust at least one of a depth position or a tilt angle of the entire head-up display device or of a part thereof, the head-up display control device comprises a position information acquisition unit to acquire position information including at least one of the depth position or the tilt angle of the entire head-up display device or of the part thereof from the position adjustment unit, and the region changing unit changes the superimposed display area according to the position of the driver's eyes and at least one of the depth position or the tilt angle of the entire head-up display device or of the part thereof acquired by the position information acquisition unit.
  6.  The head-up display control device according to any one of claims 1 to 3, wherein the region changing unit changes the superimposed display area according to the position of the driver's eyes and the image information generated by the image generation unit.
  7.  The head-up display control device according to claim 4, wherein the region changing unit changes the superimposed display area according to the position of the driver's eyes and the image information generated by the image generation unit.
  8.  The head-up display control device according to claim 5, wherein the region changing unit changes the superimposed display area according to the position of the driver's eyes and the image information generated by the image generation unit.
  9.  The head-up display control device according to claim 1, wherein the image generation unit changes a display mode of the image information depending on whether or not the real object is present in the superimposed display area.
  10.  A head-up display system comprising: a head-up display device that comprises a display unit for displaying image information and a reflection mirror for reflecting the image information displayed by the display unit and projecting it onto a projection surface, the head-up display device superimposing and displaying the image information as a virtual image on the foreground of a vehicle visually recognized by a driver; and a head-up display control device comprising an eye position detection unit to detect a position of the driver's eyes, an image generation unit to generate image information to be displayed on the display unit, and a region changing unit to change, according to the position of the driver's eyes detected by the eye position detection unit, a superimposed display area in which the image information generated by the image generation unit is superimposed and displayed as a virtual image on a real object in the foreground of the vehicle.
  11.  A head-up display control method for controlling a head-up display device that comprises a display unit for displaying image information and a reflection mirror for reflecting the image information displayed by the display unit and projecting it onto a projection surface, the head-up display device superimposing and displaying the image information as a virtual image on the foreground of a vehicle visually recognized by a driver, the method comprising the steps of:
     detecting, by an eye position detection unit, a position of the driver's eyes;
     generating, by an image generation unit, image information to be displayed on the display unit; and
     changing, by a region changing unit, according to the position of the driver's eyes detected by the eye position detection unit, a superimposed display area in which the image information generated by the image generation unit is superimposed and displayed as a virtual image on a real object in the foreground of the vehicle.
PCT/JP2018/019695 2018-05-22 2018-05-22 Head-up display control device, head-up display system, and head-up display control method WO2019224922A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/019695 WO2019224922A1 (en) 2018-05-22 2018-05-22 Head-up display control device, head-up display system, and head-up display control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/019695 WO2019224922A1 (en) 2018-05-22 2018-05-22 Head-up display control device, head-up display system, and head-up display control method

Publications (1)

Publication Number Publication Date
WO2019224922A1 (en)

Family

ID=68616850

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/019695 WO2019224922A1 (en) 2018-05-22 2018-05-22 Head-up display control device, head-up display system, and head-up display control method

Country Status (1)

Country Link
WO (1) WO2019224922A1 (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014210537A (en) * 2013-04-19 2014-11-13 トヨタ自動車株式会社 Head-up display device
JP2015060180A (en) * 2013-09-20 2015-03-30 日本精機株式会社 Head-up display device
JP2016101805A (en) * 2014-11-27 2016-06-02 パイオニア株式会社 Display device, control method, program and storage medium
WO2017090464A1 (en) * 2015-11-25 2017-06-01 日本精機株式会社 Head-up display
WO2017138242A1 (en) * 2016-02-12 2017-08-17 日立マクセル株式会社 Image display device for vehicle
WO2018030320A1 (en) * 2016-08-10 2018-02-15 日本精機株式会社 Vehicle display device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113109941A (en) * 2020-01-10 2021-07-13 未来(北京)黑科技有限公司 Layered imaging head-up display system
CN113109941B (en) * 2020-01-10 2023-02-10 未来(北京)黑科技有限公司 Layered imaging head-up display system
JP7456290B2 (en) 2020-05-28 2024-03-27 日本精機株式会社 heads up display device
CN112114427A (en) * 2020-09-08 2020-12-22 中国第一汽车股份有限公司 HUD projection height adjusting method, device and equipment and vehicle
CN114816292A (en) * 2021-01-27 2022-07-29 本田技研工业株式会社 Head-up display control system and head-up display method
CN112947761A (en) * 2021-03-26 2021-06-11 芜湖汽车前瞻技术研究院有限公司 Virtual image position adjusting method, device and storage medium of AR-HUD system
CN112947761B (en) * 2021-03-26 2023-07-28 芜湖汽车前瞻技术研究院有限公司 Virtual image position adjustment method, device and storage medium of AR-HUD system
CN114779470A (en) * 2022-03-16 2022-07-22 青岛虚拟现实研究院有限公司 Display method of augmented reality head-up display system
CN114821723A (en) * 2022-04-27 2022-07-29 江苏泽景汽车电子股份有限公司 Projection image plane adjusting method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2019224922A1 (en) Head-up display control device, head-up display system, and head-up display control method
RU2746380C2 (en) Windshield indicator with variable focal plane
US10852818B2 (en) Information provision device and information provision method
JP6830936B2 (en) 3D-LIDAR system for autonomous vehicles using dichroic mirrors
CN111433067B (en) Head-up display device and display control method thereof
JP6201690B2 (en) Vehicle information projection system
US9849832B2 (en) Information presentation system
JP6342704B2 (en) Display device
US11525694B2 (en) Superimposed-image display device and computer program
JP6981377B2 (en) Vehicle display control device, vehicle display control method, and control program
US20190241070A1 (en) Display control device and display control method
JP2010143520A (en) On-board display system and display method
JP6945933B2 (en) Display system
US11325470B2 (en) Method, device and computer-readable storage medium with instructions for controlling a display of an augmented-reality head-up display device for a transportation vehicle
US11803053B2 (en) Display control device and non-transitory tangible computer-readable medium therefor
JP6225379B2 (en) Vehicle information projection system
US20210152812A1 (en) Display control device, display system, and display control method
JP2008236711A (en) Driving support method and driving support device
JP6186905B2 (en) In-vehicle display device and program
JP2018020779A (en) Vehicle information projection system
JP6873350B2 (en) Display control device and display control method
KR101637298B1 (en) Head-up display apparatus for vehicle using aumented reality
JP2020158014A (en) Head-up display device, display control device, and display control program
WO2021171397A1 (en) Display control device, display device, and display control method
JP2018167669A (en) Head-up display device

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18919434; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 18919434; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: JP)