WO2022124028A1 - Head-Up Display - Google Patents
Head-Up Display
- Publication number: WO2022124028A1 (application PCT/JP2021/042271)
- Authority: WIPO (PCT)
Classifications
- G02B27/01 — Head-up displays; G02B27/0101 — characterised by optical features
- B60K35/233
- G02B3/0006 — Lens arrays; G02B3/0037 — arrays characterized by the distribution or form of lenses; G02B3/0056 — arranged along two different directions in a plane, e.g. honeycomb arrangement of lenses
- G02B3/02 — Simple or compound lenses with non-spherical faces
- B60K2360/23, B60K2360/31, B60K2360/332, B60K2360/334, B60K2360/343, B60K2360/785
- G02B2003/0093 — Simple or compound lenses characterised by the shape
- G02B2027/013 — Combiner of particular shape, e.g. curvature
- G02B2027/0132 — Binocular systems; G02B2027/0134 — of stereoscopic type
- G02B2027/0138 — Comprising image capture systems, e.g. camera
- G02B2027/014 — Comprising information/image processing systems
- G02B2027/0141 — Characterised by the informative content of the display
- G02B27/0179 — Display position adjusting means not related to the information to be displayed; G02B2027/0187 — slaved to motion of at least a part of the body of the user, e.g. head, eye
Definitions
- This disclosure relates to a head-up display.
- A head-up display can be used to achieve visual communication between a vehicle and its occupants.
- The head-up display can realize so-called AR (Augmented Reality) by projecting an image or video onto a windshield or combiner so that the occupant sees it superimposed on the real space visible through the windshield or combiner.
- Patent Document 1 discloses a display device including an optical system for displaying a three-dimensional virtual image using a transparent display medium.
- The display device projects light onto the windshield or combiner within the driver's field of view. Part of the projected light passes through the windshield or combiner, while the rest is reflected by it toward the driver's eyes. The driver perceives this reflected light as a virtual image of an object lying on the far side of the windshield or combiner (outside the vehicle), against the background of real objects visible through it.
- The head-up display is configured to display a predetermined image, and comprises:
- an image generation unit that emits light for generating the predetermined image; and
- a mirror that reflects the light so that the light emitted by the image generation unit is applied to a transmissive member.
- The image generation unit has a light source, an optical member that transmits light from the light source, and a liquid crystal display portion in which an original image for forming the predetermined image is generated by the light emitted from the optical member.
- the original image is formed in a shape corresponding to the distortion of the predetermined image.
- the optical member is formed in a shape that matches the shape of the original image.
- The head-up display according to another aspect is configured to display a predetermined image, and comprises:
- an image generation unit that emits light for generating the predetermined image; and
- a mirror that reflects the light so that the light emitted by the image generation unit is applied to a transmissive member.
- The image generation unit has a plurality of light sources and at least a single optical member that transmits light from each of the plurality of light sources and emits it.
- the plurality of light sources are arranged at a pitch matching the shape of the mirror so that the light emitted from the single optical member is diffused and incident on the mirror.
- According to the present disclosure, a head-up display capable of improving the visibility of a virtual image can be provided.
- FIG. 3A shows an example of the exit surface image generated by the image generation unit of the HUD according to a comparative example. FIG. 3B shows the exit surface image of FIG. 3A displayed as a virtual image. FIG. 4 shows an example of the exit surface image generated by the image generation unit of the HUD according to the present embodiment. FIG. 5 shows the virtual image object recognized when the exit surface image shown in FIG. 4 is reflected by a concave mirror.
- The "horizontal direction" is a direction including the "left direction" and the "right direction".
- The "vertical direction" is a direction including the "upward direction" and the "downward direction".
- The "front-back direction" is a direction including the "forward direction" and the "backward direction".
- The left-right direction is orthogonal to the up-down direction and the front-back direction.
- FIG. 1 is a block diagram of the vehicle system 2.
- the vehicle 1 equipped with the vehicle system 2 is a vehicle (automobile) capable of traveling in the automatic driving mode.
- the vehicle system 2 includes a vehicle control unit 3, a sensor 5, a camera 6, a radar 7, an HMI (Human Machine Interface) 8, a GPS (Global Positioning System) 9, a wireless communication unit 10, and a storage device 11.
- the vehicle system 2 includes a steering actuator 12, a steering device 13, a brake actuator 14, a brake device 15, an accelerator actuator 16, and an accelerator device 17.
- the vehicle system 2 includes a HUD 20.
- the vehicle control unit 3 is configured to control the running of the vehicle 1.
- the vehicle control unit 3 is composed of, for example, at least one electronic control unit (ECU: Electronic Control Unit).
- the electronic control unit includes a computer system (for example, a SoC (System on a Chip)) including one or more processors and a memory, and an electronic circuit composed of active elements such as transistors and passive elements such as resistors.
- the processor includes, for example, at least one of a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a GPU (Graphics Processing Unit), and a TPU (Tensor Processing Unit).
- the CPU may be composed of a plurality of CPU cores.
- the GPU may be composed of a plurality of GPU cores.
- the memory includes a ROM (Read Only Memory) and a RAM (Random Access Memory).
- the vehicle control program may be stored in the ROM.
- the vehicle control program may include an artificial intelligence (AI) program for autonomous driving.
- AI is a program (trained model) constructed by supervised or unsupervised machine learning (particularly deep learning) using a multi-layer neural network.
- the RAM may temporarily store a vehicle control program, vehicle control data, and / or peripheral environment information indicating the surrounding environment of the vehicle 1.
- the processor may be configured to develop a program designated from various vehicle control programs stored in the ROM on the RAM and execute various processes in cooperation with the RAM.
- the computer system may be configured by a non-von Neumann computer such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field-Programmable Gate Array). Further, the computer system may be composed of a combination of a von Neumann computer and a non-von Neumann computer.
- the sensor 5 includes at least one of an acceleration sensor, a speed sensor and a gyro sensor.
- the sensor 5 is configured to detect the traveling state of the vehicle 1 and output the traveling state information to the vehicle control unit 3.
- the sensor 5 may further include a seating sensor that detects whether the driver is sitting in the driver's seat, a face orientation sensor that detects the direction of the driver's face, an external weather sensor that detects the external weather condition, and a motion sensor that detects whether there is a person in the vehicle.
- the camera 6 is, for example, a camera including an image sensor such as a CCD (Charge-Coupled Device) or a CMOS (Complementary MOS).
- the camera 6 includes an external camera 6A and an internal camera 6B.
- the external camera 6A is configured to acquire image data indicating the surrounding environment of the vehicle 1 and then transmit the image data to the vehicle control unit 3.
- the vehicle control unit 3 acquires surrounding environment information based on the transmitted image data.
- the surrounding environment information may include information on an object (pedestrian, other vehicle, sign, etc.) existing outside the vehicle 1.
- the surrounding environment information may include information on the attributes of the object existing outside the vehicle 1 and information on the distance and position of the object with respect to the vehicle 1.
- the external camera 6A may be configured as a monocular camera or a stereo camera.
- the internal camera 6B is arranged inside the vehicle 1 and is configured to acquire image data indicating an occupant.
- the internal camera 6B functions as, for example, an eye tracking camera that tracks the occupant's viewpoint E (described later in FIG. 2).
- the internal camera 6B is provided, for example, in the vicinity of the rear-view mirror or inside the instrument panel.
- the radar 7 includes at least one of a millimeter wave radar, a microwave radar, and a laser radar (for example, a LiDAR unit).
- the LiDAR unit is configured to detect the surrounding environment of the vehicle 1.
- the LiDAR unit is configured to acquire 3D mapping data (point cloud data) indicating the surrounding environment of the vehicle 1 and then transmit the 3D mapping data to the vehicle control unit 3.
- the vehicle control unit 3 identifies the surrounding environment information based on the transmitted 3D mapping data.
- the HMI 8 is composed of an input unit that receives an input operation from the driver and an output unit that outputs driving information and the like to the driver.
- the input unit includes a steering wheel, an accelerator pedal, a brake pedal, an operation mode changeover switch for switching the operation mode of the vehicle 1, and the like.
- the output unit is a display (excluding the HUD) that displays various driving information.
- the GPS 9 is configured to acquire the current position information of the vehicle 1 and output the acquired current position information to the vehicle control unit 3.
- the wireless communication unit 10 is configured to receive information about other vehicles around the vehicle 1 (for example, traveling information) from those vehicles and to transmit information about the vehicle 1 (for example, traveling information) to them (vehicle-to-vehicle communication). It is also configured to receive infrastructure information from infrastructure equipment such as traffic lights and indicator lights and to transmit the traveling information of the vehicle 1 to the infrastructure equipment (road-to-vehicle communication). Further, it is configured to receive information about pedestrians from portable electronic devices (smartphones, tablets, wearable devices, etc.) carried by pedestrians and to transmit the traveling information of the vehicle 1 to those devices (pedestrian-to-vehicle communication).
- the vehicle 1 may directly communicate with another vehicle, infrastructure equipment, or a portable electronic device in an ad hoc mode, or may communicate via an access point. Further, the vehicle 1 may communicate with another vehicle, infrastructure equipment, or a portable electronic device via a communication network (not shown).
- the communication network includes at least one of the Internet, a local area network (LAN), a wide area network (WAN) and a radio access network (RAN).
- the wireless communication standard is, for example, Wi-Fi (registered trademark), Bluetooth (registered trademark), ZigBee (registered trademark), LPWA, DSRC (registered trademark) or Li-Fi.
- the vehicle 1 may communicate with another vehicle, infrastructure equipment, or a portable electronic device by using a fifth generation mobile communication system (5G).
- the storage device 11 is an external storage device such as a hard disk drive (HDD) or SSD (Solid State Drive).
- the storage device 11 may store two-dimensional or three-dimensional map information and / or a vehicle control program.
- the three-dimensional map information may be composed of 3D mapping data (point cloud data).
- the storage device 11 is configured to output map information and a vehicle control program to the vehicle control unit 3 in response to a request from the vehicle control unit 3.
- the map information and the vehicle control program may be updated via the wireless communication unit 10 and the communication network.
- when the vehicle 1 travels in the automatic driving mode, the vehicle control unit 3 automatically generates at least one of a steering control signal, an accelerator control signal, and a brake control signal based on the traveling state information, the surrounding environment information, the current position information, the map information, and the like.
- the steering actuator 12 is configured to receive a steering control signal from the vehicle control unit 3 and control the steering device 13 based on the received steering control signal.
- the brake actuator 14 is configured to receive a brake control signal from the vehicle control unit 3 and control the brake device 15 based on the received brake control signal.
- the accelerator actuator 16 is configured to receive an accelerator control signal from the vehicle control unit 3 and control the accelerator device 17 based on the received accelerator control signal.
- the vehicle control unit 3 automatically controls the travel of the vehicle 1 based on the travel state information, the surrounding environment information, the current position information, the map information, and the like. That is, in the automatic driving mode, the traveling of the vehicle 1 is automatically controlled by the vehicle system 2.
- when the vehicle 1 travels in the manual driving mode, the vehicle control unit 3 generates a steering control signal, an accelerator control signal, and a brake control signal according to the driver's manual operation of the accelerator pedal, the brake pedal, and the steering wheel.
- the steering control signal, the accelerator control signal, and the brake control signal are generated by the manual operation of the driver, so that the traveling of the vehicle 1 is controlled by the driver.
- the operation mode consists of an automatic operation mode and a manual operation mode.
- the automatic driving mode includes, for example, a fully automatic driving mode, an advanced driving support mode, and a driving support mode.
- in the fully automatic driving mode, the vehicle system 2 automatically performs all of the steering control, brake control, and accelerator control, and the driver is not in a state where the vehicle 1 can be driven.
- in the advanced driving support mode, the vehicle system 2 automatically performs all of the steering control, brake control, and accelerator control, and the driver is in a state where the vehicle 1 can be driven but does not drive it.
- in the driving support mode, the vehicle system 2 automatically performs some of the steering control, brake control, and accelerator control, and the driver drives the vehicle 1 under the driving support of the vehicle system 2.
- in the manual driving mode, the vehicle system 2 does not automatically control the driving, and the driver drives the vehicle 1 without driving support from the vehicle system 2.
- the HUD 20 is configured to display predetermined information (hereinafter referred to as HUD information) to the occupants of the vehicle 1 as an image superimposed on the real space outside the vehicle 1 (particularly, the surrounding environment in front of the vehicle 1).
- the HUD information displayed by the HUD 20 is, for example, vehicle traveling information related to the traveling of the vehicle 1 and/or surrounding environment information related to the surrounding environment of the vehicle 1 (particularly, information related to objects existing outside the vehicle 1).
- the HUD 20 is an AR display that functions as a visual interface between the vehicle 1 and the occupants.
- the HUD 20 includes an image generation unit 24 and a control unit 25.
- the image generation unit (PGU: Picture Generation Unit) 24 is configured to emit light for generating a predetermined image displayed to the occupant of the vehicle 1.
- the image generation unit 24 can emit light for generating a change image that changes according to the situation of the vehicle 1, for example.
- the control unit 25 controls the operation of each unit of the HUD 20.
- the control unit 25 is connected to the vehicle control unit 3 and controls the operation of each unit of the HUD 20 such as the image generation unit 24 based on the vehicle travel information, the surrounding environment information, and the like transmitted from the vehicle control unit 3.
- the control unit 25 is equipped with a processor such as a CPU and a memory, and the processor executes a computer program read from the memory to control the operation of the image generation unit 24 and the like.
- the vehicle control unit 3 and the control unit 25 are provided as separate configurations, but the vehicle control unit 3 and the control unit 25 may be integrally configured.
- the vehicle control unit 3 and the control unit 25 may be configured by a single electronic control unit.
- FIG. 2 is a schematic view of the HUD 20 as viewed from the side surface side of the vehicle 1.
- at least a part of the HUD 20 is located inside the vehicle 1.
- the HUD 20 is installed at a predetermined position in the interior of the vehicle 1.
- the HUD 20 may be located within the dashboard of vehicle 1.
- the HUD 20 includes a HUD main body 21.
- the HUD main body 21 has a main body housing 22 and an exit window 23.
- the exit window 23 is made of a transparent plate that allows visible light to pass through.
- the HUD main body 21 has an image generation unit 24, a control unit 25, and a concave mirror 26 (an example of a mirror) inside the main body housing 22.
- the image generation unit 24 is installed in the main body housing 22 so as to face the front of the HUD 20.
- the image generation unit 24 has a light emitting surface 110 (an example of a liquid crystal unit) that emits light for generating an image toward the outside.
- the light emitting surface 110 is provided with a predetermined light emitting region 110A that emits light for generating a predetermined image displayed toward the occupant of the vehicle 1.
- the predetermined light emission region 110A will be described later with reference to FIG.
- the concave mirror 26 is arranged on the optical path of the light emitted from the image generation unit 24.
- the concave mirror 26 is configured to reflect the light emitted from the image generation unit 24 toward the windshield 18 (for example, the front window of the vehicle 1).
- the concave mirror 26 has a reflecting surface curved in a concave shape in order to form the predetermined image, and reflects the image formed by the light emitted from the image generation unit 24 at a predetermined magnification.
- the concave mirror 26 may have, for example, a drive mechanism 27, and may be configured so that the position and orientation of the concave mirror 26 can be changed based on a control signal transmitted from the control unit 25.
- the control unit 25 generates a control signal for controlling the operation of the image generation unit 24 based on the vehicle traveling information, the surrounding environment information, and the like transmitted from the vehicle control unit 3, and transmits the generated control signal to the image generation unit 24. Further, the control unit 25 may generate a control signal for changing the position and orientation of the concave mirror 26 and transmit it to the drive mechanism 27.
- the light emitted from the light emitting surface 110 of the image generation unit 24 is reflected by the concave mirror 26 and emitted from the exit window 23 of the HUD main body unit 21.
- the light emitted from the exit window 23 of the HUD main body 21 is applied to the windshield 18 which is a transmissive member. A part of the light emitted from the exit window 23 to the windshield 18 is reflected toward the occupant's viewpoint E.
- the occupant recognizes the light emitted from the HUD main body 21 as a virtual image (predetermined image) formed at a predetermined distance in front of the windshield 18.
- the occupant can visually recognize the virtual image object I formed by the predetermined image as if it were floating above the road located outside the vehicle.
- the occupant's viewpoint E may be either the occupant's left eye viewpoint or the right eye viewpoint.
- the viewpoint E may be defined as the midpoint of a line segment connecting the viewpoint of the left eye and the viewpoint of the right eye.
- the position of the occupant's viewpoint E is specified, for example, based on the image data acquired by the internal camera 6B.
- the position of the viewpoint E of the occupant may be updated at a predetermined cycle, or may be determined only once when the vehicle 1 is started.
- when a planar virtual image is displayed, a predetermined image is projected so as to become a virtual image at a single, arbitrarily determined distance. When a 3D image (stereoscopic image) is displayed, a plurality of predetermined images that are the same as or different from each other are projected so as to become virtual images at different distances.
- the distance of the virtual image object I (the distance from the occupant's viewpoint E to the virtual image) can be adjusted as appropriate by adjusting the optical path length from the image generation unit 24 to the occupant's viewpoint E (for example, by adjusting the distance between the image generation unit 24 and the concave mirror 26).
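The adjustment described above can be sketched with the thin-mirror approximation: placing the display inside the focal length of a concave mirror produces a magnified virtual image whose apparent distance depends on the object distance. This is an illustrative model only; the patent gives no focal lengths or distances, so all numbers below are assumptions.

```python
# Hedged sketch: thin-mirror model of how moving the image generation unit
# relative to the concave mirror changes where the virtual image appears.
# All numeric values are illustrative assumptions.

def virtual_image_distance(focal_length_mm: float, object_distance_mm: float) -> float:
    """Image distance from the mirror equation 1/f = 1/do + 1/di.

    For a concave mirror with the object inside the focal length (do < f),
    di is negative, i.e. the image is virtual and magnified.
    """
    if object_distance_mm == focal_length_mm:
        raise ValueError("object at focal point: image at infinity")
    # Solve 1/di = 1/f - 1/do
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

def magnification(object_distance_mm: float, image_distance_mm: float) -> float:
    """Lateral magnification m = -di/do (positive: upright virtual image)."""
    return -image_distance_mm / object_distance_mm

# Example: mirror with f = 200 mm, display placed 150 mm away (inside f)
di = virtual_image_distance(200.0, 150.0)   # -600 mm: virtual image behind mirror
m = magnification(150.0, di)                # 4x upright magnification
```

Moving the display closer to the focal point pushes the virtual image farther away, which matches the adjustment mechanism described in the text.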
- the virtual image object I recognized by the occupant as the predetermined image can be distorted by the reflection at the concave mirror 26. Therefore, in order for the occupant to accurately recognize the information of the virtual image object I, it is desirable to correct this distortion.
- FIG. 3A shows an example of an image 312 on the light emission surface 310 of the image generation unit of a HUD according to a comparative example, that is, the image generated by the light before being reflected by the concave mirror (hereinafter also referred to as an emission surface image). FIG. 3B shows the virtual image object X recognized by the occupant as the predetermined image after the emission surface image 312 shown in FIG. 3A is reflected by the concave mirror. The images shown in FIGS. 3A and 3B display information indicating the traveling speed of the own vehicle (50 km/h).
- in the comparative example, the emission surface image 312 on the light emission surface 310 is a normal image to which no correction for the distortion caused by the reflection at the concave mirror has been applied.
- the virtual image object X generated by the light reflected by the concave mirror is visually recognized as an image having a distorted shape as shown in FIG. 3B.
- the virtual image object X is visually recognized as a curved image in which the upper side thereof is extended and the lower side is contracted.
- to prevent this, in the present embodiment, the emission surface image is subjected in advance to a reverse correction process (also referred to as warping correction).
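A minimal sketch of such warping correction, assuming a toy forward-distortion model (the patent's actual mirror characteristic is not given): each display pixel is filled with the source pixel that the optics will map onto that position, so the distortion and the pre-distortion cancel when the image is viewed through the mirror.

```python
import numpy as np

def mirror_warp(u, v):
    """Toy forward distortion of the optical path on normalized
    coordinates in [0, 1] (illustrative assumption, not the patent's
    measured mirror characteristic)."""
    return u, v ** 1.5

def prewarp(image):
    """Pre-distort `image` so that applying `mirror_warp` afterwards
    approximately restores it. Uses the usual gather formulation of
    warping correction: display(p) = desired(W(p)), so that the
    perceived image desired(W(W^-1(q))) = desired(q)."""
    h, w = image.shape[:2]
    vv, uu = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w),
                         indexing="ij")
    su, sv = mirror_warp(uu, vv)  # where the optics sends each display pixel
    si = np.clip(np.rint(sv * (h - 1)).astype(int), 0, h - 1)
    sj = np.clip(np.rint(su * (w - 1)).astype(int), 0, w - 1)
    return image[si, sj]          # nearest-neighbor inverse mapping
```

In practice the forward map would be measured or modeled from the concave mirror geometry, and interpolation would replace nearest-neighbor sampling; this sketch only shows the gather structure of the correction.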
- FIG. 4 is a diagram showing an example of an exit surface image 112 generated by the light emitted from the image generation unit 24 of the HUD 20.
- FIG. 5 is a diagram showing a virtual image object I that is recognized by the occupant as a predetermined image after the emission surface image 112 shown in FIG. 4 is reflected by the concave mirror 26.
- the light emitting surface 110 of the image generation unit 24 is formed in a rectangular shape and is provided with a predetermined light emission region 110A that emits light for generating the predetermined image. The emission surface image 112 is generated by the light emitted from this predetermined light emission region 110A. In the emission surface image 112 of this example, a speed image notifying that the current traveling speed is 50 km/h is displayed, as in the comparative example shown in FIGS. 3A and 3B.
- the predetermined light emission region 110A of the rectangular light emission surface 110 is formed as, for example, an annular fan-shaped emission region.
- the annular fan-shaped predetermined light emission region 110A is an emission region that forms a rectangular display range 114 in which the virtual image object I shown in FIG. 5 is displayed.
- the predetermined light emitting region 110A is formed so as to occupy a region in which the annular fan shape is maximized on the light emitting surface 110, for example, in order to form a large display range 114.
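As an illustration of how such an annular fan-shaped (annular sector) region could be described on a rectangular emitting surface, the mask below marks the pixels inside an annular sector. The center position, radii, and angular range are invented for illustration; the patent gives no numeric values.

```python
import numpy as np

def annular_sector_mask(h, w, center, r_inner, r_outer,
                        ang_min_deg, ang_max_deg):
    """Boolean (h, w) mask that is True inside the annular sector
    ("annular fan") defined by the radius and angle ranges around
    `center` (row, col); the center may lie outside the array."""
    yy, xx = np.mgrid[0:h, 0:w]
    dy, dx = yy - center[0], xx - center[1]
    r = np.hypot(dy, dx)
    ang = np.degrees(np.arctan2(dy, dx))
    return (r >= r_inner) & (r <= r_outer) & \
           (ang >= ang_min_deg) & (ang <= ang_max_deg)

# Illustrative region: a downward-opening sector whose (virtual) center
# lies above the panel, giving an arc-shaped emission region on the
# rectangular surface (all parameters assumed).
mask = annular_sector_mask(120, 240, center=(-200.0, 120.0),
                           r_inner=220.0, r_outer=330.0,
                           ang_min_deg=60.0, ang_max_deg=120.0)
```

Maximizing the display range 114, as the text describes, would correspond to choosing the radii and angular range so that the sector just fits within the rectangular surface.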
- the emission surface image 112 in the predetermined light emission region 110A is corrected in advance by warping to cancel the distortion caused by the reflection at the concave mirror 26: the upper side of the image is stretched and the lower side is shrunk.
- the degree of distortion generated in the virtual image object I by the reflection from the concave mirror 26 becomes smaller toward the central region of the virtual image object I and larger toward the end regions away from the central region. Therefore, the amount of correction by warping applied to the emission surface image 112, which is the original image of the virtual image object I, differs depending on the position in the emission surface image 112, according to the degree of distortion of the corresponding portion of the virtual image object I.
- the correction amount for the region of the emission surface image corresponding to the center of the virtual image object I is relatively small, and the correction amount for the regions corresponding to the end portions away from the center is relatively large.
- the emission surface image 112, which is the original image for forming the predetermined image, is formed in a shape subjected to reverse correction processing that distorts the image in the reverse direction in advance by the amount it will be distorted by the reflection from the concave mirror 26. Therefore, when the light that generates the emission surface image 112 is reflected by the concave mirror 26, it is visually recognized as, for example, a horizontally long rectangular virtual image object I without distortion, as shown in FIG. 5.
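The reverse correction described above can be sketched numerically. In the following minimal Python sketch, `mirror_distort` is a purely hypothetical radial model of the distortion a concave mirror might introduce (the function names, the quadratic model, and the coefficient `k` are illustrative assumptions, not taken from this disclosure); `warp_correct` inverts it by fixed-point iteration, so that points pre-distorted this way land back on the intended image once the "mirror" acts on them.

```python
import numpy as np

def mirror_distort(xy, k=0.2):
    """Hypothetical radial distortion: points far from the center move more."""
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
    return xy * (1.0 + k * r2)

def warp_correct(xy, k=0.2, iters=30):
    """Reverse (warping) correction: find p such that mirror_distort(p) == xy,
    using simple fixed-point iteration p <- xy / (1 + k*|p|^2)."""
    p = np.array(xy, dtype=float)
    for _ in range(iters):
        r2 = np.sum(p ** 2, axis=-1, keepdims=True)
        p = xy / (1.0 + k * r2)
    return p

pts = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 0.0]])  # intended image points
pre = warp_correct(pts)       # pre-distorted (emission surface image) points
out = mirror_distort(pre)     # what the hypothetical mirror does to them
# after "reflection" the points coincide with the intended image again
```

The same idea applies per-pixel when warping a raster image: the displayed original image is sampled through the inverse of the mirror's distortion map.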
- FIG. 6 is a horizontal cross-sectional view of the image generation unit 24A included in the HUD 20A.
- FIG. 7 is a schematic view of the image generation unit 24A as viewed from the front side (light emitting surface 110 side).
- the image generation unit 24A includes a light source substrate 120 on which a plurality of light sources 121 (in this example, seven light sources, the first light source 121A to the seventh light source 121G) are mounted, a lens 130 (an example of an optical member) arranged on the front side of the light sources 121, and a light emitting surface 110 arranged on the front side of the lens 130.
- the image generation unit 24A further includes a lens holder 140 arranged on the front side of the light source substrate 120, a heat sink 150 arranged on the rear side of the light source substrate 120, and a PGU housing 160.
- the light source 121 (first light source 121A to seventh light source 121G) is, for example, a laser light source or an LED light source.
- the laser light source is, for example, an RGB laser light source configured to emit red laser light, green laser light, and blue laser light, respectively.
- the first light source 121A to the seventh light source 121G are arranged on the light source substrate 120 at a constant interval in the left-right direction.
- the light source substrate 120 is, for example, a printed circuit board made of an insulator in which wiring of an electric circuit is printed on the surface or inside of the plate.
- the lens 130 has an incident surface 132 on which the light from the light source 121 is incident and an exit surface 133 from which the incident light is emitted.
- the lens 130 is, for example, an aspherical convex lens in which both the entrance surface 132 and the exit surface 133 are formed in a convex shape.
- the lens 130 is configured to transmit or reflect the light emitted from the light source 121 and emit it toward the light emitting surface 110.
- a prism, a diffuser plate, a magnifying glass, or the like may be appropriately added to the lens 130 that functions as an optical member.
- the lens 130 is configured by arranging seven aspherical convex lenses corresponding to the first light source 121A to the seventh light source 121G in parallel in the left-right direction, with adjacent aspherical convex lenses partially joined to one another.
- the lens 130 has a first region 131A that transmits the first light emitted from the first light source 121A, a second region 131B that transmits the second light emitted from the second light source 121B, a third region 131C that transmits the third light emitted from the third light source 121C, and likewise a fourth region 131D to a seventh region 131G that transmit the light emitted from the fourth light source 121D to the seventh light source 121G, respectively.
- the incident surface 132A of the first region 131A to the incident surface 132G of the seventh region 131G are rearwardly convex incident surfaces.
- the exit surface 133A of the first region 131A to the exit surface 133G of the seventh region 131G are forwardly convex exit surfaces.
- the lens 130 is attached to the lens holder 140 so that the centers of the light emitting surfaces of the first light source 121A to the seventh light source 121G are at the focal positions.
- the light emitting surface 110 is a liquid crystal display, a DMD (Digital Mirror Device), or the like.
- the light emitting surface 110 forms light for generating an image by the light of the light source 121 transmitted through the lens 130.
- the light emitting surface 110 is attached to the front surface of the PGU housing 160 with the emitting surface facing forward of the image generation unit 24A.
- the drawing method of the image generation unit 24A may be a raster scan method, a DLP method, or an LCOS method.
- the light source 121 of the image generation unit 24A may be an LED light source.
- the light source 121 of the image generation unit 24A may be a white LED light source.
- the lens holder 140 holds the lens 130 in the PGU housing 160 so that the light emitted from the light source 121 is correctly incident on the incident surface 132 of the lens 130.
- the heat sink 150 is made of aluminum or copper having high thermal conductivity.
- the heat sink 150 is provided so as to come into contact with the back surface of the light source substrate 120 in order to dissipate heat generated from the light source substrate 120.
- the light emitted from the first light source 121A to the seventh light source 121G is incident on the incident surfaces 132A to 132G of the lens 130. Since the lens 130 has a shape in which seven aspherical convex lenses are joined in parallel as described above, most of the light emitted from the first light source 121A, as shown by the first optical path 122A for example, enters the first region 131A of the lens 130, becomes light parallel to the optical axis 125A, is emitted from the first region 131A, and is incident on the light emitting surface 110.
- similarly, most of the light emitted from the second light source 121B to the seventh light source 121G is incident on the second region 131B to the seventh region 131G, respectively, and light parallel to the respective optical axes of the second light source 121B to the seventh light source 121G is incident on the light emitting surface 110.
- the lens 130 is formed by stacking seven aspherical convex lenses arranged in parallel in the left-right direction in a plurality of stages in the vertical direction corresponding to the light source.
- the lens 130 of this example has a first region 131A to a seventh region 131G (examples of the convex surface portion) arranged in parallel in the left-right direction corresponding to the first light source 121A to the seventh light source 121G, and an eighth region 131H to a fourteenth region 131N (examples of the convex surface portion) arranged in parallel in the left-right direction corresponding to the eighth light source 121H to the fourteenth light source 121N, formed in two stages in the vertical direction.
- Each light source 121 shown by a broken line is arranged behind the lens 130.
- An annular fan-shaped predetermined light emission region 110A is formed on the light emitting surface 110, and the emission surface image 112 (50 km/h), which is the original image of the predetermined image forming the virtual image object I, is generated in the predetermined light emission region 110A. The emission surface image 112 is subjected to correction processing by warping.
- the lens 130, in which the first region 131A to the seventh region 131G and the eighth region 131H to the fourteenth region 131N are stacked in two stages in the vertical direction, is formed in a shape matching the shape of the emission surface image 112 corrected by warping. Specifically, the lens 130 is formed in a curved shape in accordance with the shape of the emission surface image 112 that has been corrected by warping.
- the first region 131A to the seventh region 131G of the lens 130 are arranged so that a virtual line connecting the centers of the exit surface 133A to the exit surface 133G when viewed from the front is a curved line.
- the eighth region 131H to the fourteenth region 131N of the lens 130 are arranged so that the virtual lines connecting the centers of the emission surface 133H to the emission surface 133N when viewed from the front form a curved line.
- the first light source 121A to the seventh light source 121G are arranged so that a virtual line connecting these light sources is curved in accordance with the shape of the emission surface image 112 corrected by warping.
- the eighth light source 121H to the fourteenth light source 121N corresponding to the eighth region 131H to the fourteenth region 131N are also arranged so that the virtual lines connecting these light sources are curved.
- among the first region 131A to the seventh region 131G, the fourth region 131D arranged in the central portion is a lens that emits light for forming the central region of the emission surface image 112.
- the first region 131A and the seventh region 131G, arranged at the ends away from the central portion, are lenses that emit light for forming the end regions of the emission surface image 112.
- among the eighth region 131H to the fourteenth region 131N, the eleventh region 131K arranged in the central portion is a lens that emits light for forming the central region of the emission surface image 112.
- the eighth region 131H and the fourteenth region 131N, arranged at the ends away from the central portion, are lenses that emit light for forming the end regions of the emission surface image 112.
- the degree of distortion generated in the virtual image object I by the reflection from the concave mirror 26 is smaller toward the central region of the virtual image object I and larger away from it. Therefore, the amount of correction by warping applied to the emission surface image 112, which is the original image of the virtual image object I, is small in the region of the emission surface image 112 corresponding to the center of the virtual image object I and increases in the regions corresponding to the ends away from the center.
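The position dependence of the correction amount can be illustrated with a toy model in which the correction grows with distance from the image center (the quadratic growth and the coefficient `k` are assumptions for illustration, not values from this disclosure):

```python
def correction_amount(x, y, k=0.3):
    """Relative warping correction at normalized position (x, y) in [-1, 1]:
    zero at the center of the virtual image, growing toward the edges."""
    return k * (x * x + y * y)

center = correction_amount(0.0, 0.0)
mid = correction_amount(0.5, 0.5)
edge = correction_amount(1.0, 1.0)
# the edge regions of the emission surface image receive the largest correction
```

Any monotone-increasing radial profile would serve the same explanatory purpose; the actual profile would be determined by the mirror geometry.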
- it is preferable that the first region 131A to the fourteenth region 131N of the lens 130 be formed so that the regions that emit light for forming the central region of the emission surface image 112 and the regions that emit light for forming the peripheral regions of the emission surface image 112 have different shapes.
- the first region 131A to the fourteenth region 131N may be configured so that the curved surfaces constituting the exit surfaces 133A to 133N have mutually different curvatures according to the degree of distortion generated in each region (central region, end regions, and intermediate regions) of the virtual image object I by the reflection from the concave mirror 26.
- the HUD 20A includes the image generation unit 24A that emits light for generating a predetermined image, and the concave mirror 26 that reflects the light emitted by the image generation unit 24A so that it is applied to the windshield 18.
- the image generation unit 24A has the light source 121, the lens 130 that transmits the light from the light source 121, and the light emitting surface 110 having the predetermined light emission region 110A in which the original image for forming the predetermined image is generated by the light emitted from the lens 130.
- the original image is formed in a shape corresponding to the distortion of the predetermined image, and the lens 130 is formed in a shape matching the shape of the original image.
- the emission surface image 112 which is the original image, is formed in advance in a shape that corrects the distortion of a predetermined image generated by the reflection of the emission surface image 112 by the concave mirror 26.
- the lens 130 is formed in a shape that matches the shape of the emission surface image 112 when viewed from the light emission surface 110 side.
- the emission surface image 112 (original image) displayed on the predetermined light emission region 110A is subjected in advance to reverse correction processing (correction processing by warping).
- since the shape of the lens 130 (the first region 131A to the seventh region 131G and the eighth region 131H to the fourteenth region 131N) is formed to match the shape of the emission surface image 112, the utilization efficiency of the light emitted from the light sources with respect to the predetermined light emission region 110A, on which the emission surface image 112 corrected by warping is displayed, can be improved. As a result, the visibility of the virtual image object I can be improved.
- the shape of the lens 130 is a curved shape. The emission surface image 112 (original image) corrected by warping has a curved shape, and by forming the lens 130 into a curved shape matching it, the utilization efficiency of the light emitted from the lens 130 toward the predetermined light emission region 110A of the light emitting surface 110 can easily be improved.
- the light source 121 includes the first light source 121A to the fourteenth light source 121N, and the lens 130 includes a plurality of convex surface portions, the first region 131A to the fourteenth region 131N, which transmit the light from each of the first light source 121A to the fourteenth light source 121N.
- the first light source 121A to the seventh light source 121G and the eighth light source 121H to the fourteenth light source 121N are arranged in curved lines when viewed from the light emitting surface 110 side, and the first region 131A to the seventh region 131G and the eighth region 131H to the fourteenth region 131N are likewise arranged in curved shapes when viewed from the light emitting surface 110 side. This improves the utilization efficiency of the light emitted to the predetermined light emission region 110A of the light emitting surface 110.
- the predetermined image (virtual image object I) is formed in a horizontally long rectangular shape, and the degree of distortion of the end regions of the predetermined image is larger than that of the central region. According to the difference between the degree of distortion of the central region and that of the end regions, among the plurality of convex surface portions (the first region 131A to the fourteenth region 131N), the shape of the convex surface portions arranged corresponding to the central region differs from that of the convex surface portions arranged corresponding to the end regions.
- the end regions of the virtual image object I are more likely to be distorted by the reflection from the concave mirror 26 than the central region. By differentiating the shape of the convex surface portions corresponding to the central region (for example, the fourth region 131D and the eleventh region 131K) from that of the convex surface portions corresponding to the end regions (for example, the first region 131A, the seventh region 131G, the eighth region 131H, and the fourteenth region 131N), the distortion of the image can be appropriately corrected.
- FIG. 8 is a schematic view of the image generation unit 24B included in the HUD 20B as viewed from above. As shown in FIG. 8, the image generation unit 24B is also provided, like the image generation unit 24A of the first embodiment, with a plurality of light sources and a lens configured to correspond to these light sources.
- the first light source 221A to the fifth light source 221E are provided.
- the first light source 221A to the fifth light source 221E are arranged in parallel in the left-right direction.
- the lens 230 is a single lens in which five aspherical convex lenses corresponding to the first light source 221A to the fifth light source 221E are arranged in parallel along the left-right direction, with adjacent aspherical convex lenses partially joined to one another.
- the lens 230 has a first region 231A that transmits the first light emitted from the first light source 221A, a second region 231B that transmits the second light emitted from the second light source 221B, a third region 231C that transmits the third light emitted from the third light source 221C, a fourth region 231D that transmits the fourth light emitted from the fourth light source 221D, and a fifth region 231E that transmits the fifth light emitted from the fifth light source 221E.
- the incident surface 232A of the first region 231A, the incident surface 232B of the second region 231B, the incident surface 232C of the third region 231C, the incident surface 232D of the fourth region 231D, and the incident surface 232E of the fifth region 231E are rearwardly convex incident surfaces.
- the exit surface 233A of the first region 231A, the exit surface 233B of the second region 231B, the exit surface 233C of the third region 231C, the exit surface 233D of the fourth region 231D, and the exit surface 233E of the fifth region 231E are forwardly convex exit surfaces.
- the first light source 221A to the fifth light source 221E are arranged at a pitch matching the shape of the concave mirror 26 so that the light emitted from them, having passed through the lens 230 and exited from the exit surface 233 of the lens 230, travels in a diffused manner toward the concave mirror 26.
- specifically, the first light source 221A to the fifth light source 221E are arranged so that their pitches P1 to P4 are shorter than the pitches P5 to P8 between the vertices of the exit surfaces 233A to 233E of the lens 230.
- the pitch P1 between the first light source 221A and the second light source 221B is shorter than the pitch P5 between the apex of the exit surface 233A of the first region 231A and the apex of the exit surface 233B of the second region 231B in the lens 230.
- the pitch P2 between the second light source 221B and the third light source 221C is shorter than the pitch P6 between the apex of the exit surface 233B of the second region 231B of the lens 230 and the apex of the exit surface 233C of the third region 231C.
- the pitch P3 between the third light source 221C and the fourth light source 221D is shorter than the pitch P7 between the apex of the exit surface 233C of the third region 231C of the lens 230 and the apex of the exit surface 233D of the fourth region 231D.
- the pitch P4 between the fourth light source 221D and the fifth light source 221E is shorter than the pitch P8 between the apex of the exit surface 233D of the fourth region 231D of the lens 230 and the apex of the exit surface 233E of the fifth region 231E.
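Why a shorter source pitch diffuses the beams can be sketched with a thin-lens approximation: a source sitting at the focal plane but laterally offset by d from the vertex of its lens region produces a collimated beam tilted by roughly atan(d/f). All numerical values and names in the sketch below are illustrative assumptions, not taken from this disclosure.

```python
import math

f = 10.0           # focal length of each lens region (assumed)
pitch_lens = 4.0   # pitch between lens-region vertices (analogue of P5..P8)
pitch_src = 3.0    # shorter pitch between light sources (analogue of P1..P4)

def beam_angle_deg(i, n=5):
    """Tilt of the collimated beam from source i (0..n-1); positive = outward.
    A shorter source pitch places outer sources inward of their lens vertices,
    tilting the outer beams outward so the overall bundle fans out."""
    c = (n - 1) / 2                         # index of the central source
    d = (i - c) * (pitch_lens - pitch_src)  # source offset from its vertex
    return math.degrees(math.atan2(d, f))

angles = [beam_angle_deg(i) for i in range(5)]
# the central beam goes straight on; the outer beams fan outward symmetrically
```

With equal pitches (as in the comparative example with the 300-series lens) the offset d would be zero for every source and all beams would stay parallel.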
- the first light source 221A to the fifth light source 221E are arranged so that the light emitted from them, having passed through the lens 230 and exited from the exit surface 233 of the lens 230, is incident substantially perpendicularly on the concave mirror 26.
- specifically, they are arranged so that the light passing on the optical axis of the lens 230, or light passing along a path close to it, is incident substantially perpendicularly on the concave mirror 26.
- the first light source 221A is arranged so that, of the light emitted from the first light source 221A, passing through the first region 231A of the lens 230 and exiting from the exit surface 233A, the light L1 traveling on the optical axis in the first region 231A or along a path close to it is incident substantially perpendicularly on the concave mirror 26.
- the second light source 221B is arranged so that, of the light emitted from the second light source 221B, passing through the second region 231B of the lens 230 and exiting from the exit surface 233B, the light L2 traveling on the optical axis in the second region 231B or along a path close to it is incident substantially perpendicularly on the concave mirror 26.
- the third light source 221C is arranged so that, of the light emitted from the third light source 221C, passing through the third region 231C of the lens 230 and exiting from the exit surface 233C, the light L3 traveling on the optical axis in the third region 231C or along a path close to it is incident substantially perpendicularly on the concave mirror 26.
- the fourth light source 221D is arranged so that, of the light emitted from the fourth light source 221D, passing through the fourth region 231D of the lens 230 and exiting from the exit surface 233D, the light L4 traveling on the optical axis in the fourth region 231D or along a path close to it is incident substantially perpendicularly on the concave mirror 26.
- the fifth light source 221E is arranged so that, of the light emitted from the fifth light source 221E, passing through the fifth region 231E of the lens 230 and exiting from the exit surface 233E, the light L5 traveling on the optical axis in the fifth region 231E or along a path close to it is incident substantially perpendicularly on the concave mirror 26.
- the first region 231A to the fifth region 231E (examples of the convex surface portion) of the lens 230 are formed so that the region emitting light for forming the central region of the emission surface image differs in shape from the regions emitting light for forming the peripheral regions of the emission surface image.
- the shape of the third region 231C that emits light for forming the central region of the emission surface image is formed so as to be symmetrical.
- the shapes of the first region 231A, the second region 231B, the fourth region 231D, and the fifth region 231E that emit light for forming the peripheral region of the emission surface image are formed to be asymmetrical.
- the degree of asymmetry is greater in the regions that emit light for forming the end portions of the emission surface image; that is, the asymmetry of the first region 231A and the fifth region 231E is greater than that of the second region 231B and the fourth region 231D.
- FIG. 9 is a diagram showing the asymmetry of the shape of the first region 231A in the lens 230.
- the first region 231A is formed so that the inclination of the curved surface 233A1 on the left side (the side farther from the second region 231B) of the exit surface 233A is gentler than that of the curved surface 233A2 on the right side (the side closer to the second region 231B). That is, the curvature of the left curved surface 233A1 is smaller than the curvature of the right curved surface 233A2 of the exit surface 233A.
- similarly, the second region 231B is formed so that the curvature on the side of the exit surface 233B closer to the first region 231A is smaller than the curvature on the side closer to the third region 231C.
- the difference in curvature between the left and right sides is larger on the exit surface 233A of the first region 231A than on the exit surface 233B of the second region 231B.
- the fifth region 231E is formed so that the curvature on the right side (the side farther from the fourth region 231D) of the exit surface 233E is smaller than the curvature on the left side (the side closer to the fourth region 231D). Further, the fourth region 231D is formed so that the curvature on the side of the exit surface 233D closer to the fifth region 231E is smaller than the curvature on the side closer to the third region 231C. The difference in curvature between the left and right sides is larger on the exit surface 233E of the fifth region 231E than on the exit surface 233D of the fourth region 231D.
- FIG. 8 shows the image generation unit 24B as viewed from above, but the shape of the lens may similarly be differentiated for each region when viewed from other directions. For example, when the lens has convex surface portions stacked in a plurality of stages in the vertical direction, the convex surface portions that emit light for forming the central region of the emission surface image may differ in shape from the convex surface portions that emit light for forming the upper and lower end regions of the emission surface image.
- the pitch Px between the light sources 321A to 321E and the pitch Py between the vertices of the exit surfaces 333A to 333E (convex surface portions) in the first region 331A to the fifth region 331E of the lens 330 were set to the same pitch. Further, the curvatures of the exit surfaces 333A to 333E were set so that the light La to Le emitted from the first region 331A to the fifth region 331E was parallel to the optical axes of the respective light sources 321A to 321E.
- the HUD 20B includes the image generation unit 24B that emits light for generating a predetermined image, and the concave mirror 26 that reflects the light emitted by the image generation unit 24B so that it is applied to the windshield 18. The image generation unit 24B has at least the first light source 221A to the fifth light source 221E and the single lens 230 that transmits and emits the light from each of the first light source 221A to the fifth light source 221E. The first light source 221A to the fifth light source 221E are arranged at a pitch matching the shape of the concave mirror 26 so that the light emitted from the single lens 230 is diffused and incident on the concave mirror 26.
- according to this configuration, since the optical member for obtaining diffused light can be configured by the single lens 230, there is no need to add another member such as a diffuser plate, and miniaturization and cost reduction of the HUD 20B can be realized.
- the single lens 230 has a plurality of parallel convex surface portions, the first region 231A to the fifth region 231E, arranged along the parallel direction of the first light source 221A to the fifth light source 221E so as to emit the light from each of the light sources.
- the first light source 221A to the fifth light source 221E are arranged so that their pitch is shorter than the pitch between the vertices of the first region 231A to the fifth region 231E of the lens 230.
- according to this configuration, the light emitted from the regions 231A to 231E of the lens 230 toward the concave mirror 26 can be diffused more than in the case where the pitch between the light sources and the pitch between the vertices of the exit surfaces (convex surface portions) of the lens are the same. This makes it possible to improve the uniformity of the luminance distribution of the virtual image object I.
- the light emitted from each of the first region 231A to the fifth region 231E of the lens 230 is configured to be vertically incident on the concave mirror 26. According to this configuration, the light can be uniformly reflected over the entire concave mirror 26, so that the uniformity of the luminance distribution of the virtual image object I can be further improved.
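The perpendicular-incidence arrangement has a simple geometric reading for a spherical concave mirror: the surface normal at every point passes through the center of curvature, so any ray aimed at that center strikes the mirror perpendicularly. The following 2-D check is an illustrative sketch with assumed values, not a model of the actual HUD geometry.

```python
import math

R = 100.0  # radius of curvature of the assumed spherical mirror
# center of curvature C is placed at the origin

def incidence_angle_deg(src):
    """Angle between a ray fired from src toward C and the mirror normal
    at the point where the ray meets the sphere of radius R around C."""
    sx, sy = src
    norm = math.hypot(sx, sy)           # distance of the source from C
    px, py = sx / norm * R, sy / norm * R   # intersection with the sphere
    # surface normal at (px, py) points toward C, i.e. along (-px, -py)
    dot = (-sx) * (-px) + (-sy) * (-py)     # ray direction is (-sx, -sy)
    cosang = dot / (norm * R)
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

sources = [(150.0, 0.0), (120.0, 80.0), (90.0, -110.0)]  # outside the sphere
angles = [incidence_angle_deg(s) for s in sources]
# every such ray meets the mirror at 0 degrees from the normal
```

A free-form mirror would not have a single center of curvature, but the same per-region idea motivates shaping each lens region so its chief ray arrives near the local normal.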
- the predetermined image (virtual image object I) is formed in a horizontally long rectangular shape, and among the first region 231A to the fifth region 231E of the lens 230, which are a plurality of parallel convex surface portions, the shape of the region that emits light for forming the central portion of the predetermined image differs from that of the regions that emit light for forming the end portions of the predetermined image.
- according to this configuration, the light for forming the end portions of the predetermined image can also be made incident on the concave mirror 26 in a state close to perpendicular, further improving the uniformity of the luminance distribution of the virtual image object I.
- the light emitted from the image generation unit 24 is configured to be reflected by the concave mirror 26 and irradiated to the windshield 18, but the present invention is not limited to this.
- the light reflected by the concave mirror 26 may be applied to a combiner (not shown) provided inside the windshield 18.
- the combiner is composed of a transmissive member such as a transparent plastic disk. A part of the light emitted from the image generation unit 24 of the HUD main body 21 to the combiner is reflected toward the occupant's viewpoint E as in the case of irradiating the windshield 18 with light.
- the HUD is mounted on an automobile, but the present invention is not limited to this.
- the HUD may be mounted on a motorcycle, a railroad, an aircraft, or the like.
- the vehicle driving mode has been described as including a fully automated driving mode, an advanced driving support mode, a driving support mode, and a manual driving mode, but the driving mode of the vehicle should not be limited to these four modes.
- the driving mode of the vehicle may include at least one of these four modes. For example, only one of the driving modes of the vehicle may be executed.
- the classification and display form of the driving mode of the vehicle may be appropriately changed in accordance with the laws and regulations related to automatic driving in each country.
- the definitions of "fully automatic driving mode", "advanced driving support mode", and "driving support mode" described in this embodiment are merely examples, and these definitions may be changed as appropriate in accordance with the laws or rules related to automatic driving in each country.
Abstract
Description
In the future autonomous driving society, visual communication between vehicles and people is expected to become increasingly important. For example, visual communication between a vehicle and its occupants is expected to become increasingly important. In this respect, a head-up display (HUD) can be used to realize visual communication between the vehicle and the occupants. A head-up display can realize so-called AR (Augmented Reality) by projecting an image or video onto a windshield or combiner and superimposing that image on the real space through the windshield or combiner so that the occupant can visually recognize it.
A head-up display configured to display a predetermined image, comprising:
an image generation unit that emits light for generating the predetermined image; and
a mirror that reflects the light emitted by the image generation unit so that the light is applied to a transmissive member, wherein
the image generation unit has:
a light source;
an optical member that transmits light from the light source; and
a liquid crystal unit on which an original image for forming the predetermined image is generated by light emitted from the optical member,
the original image is formed in a shape corresponding to distortion of the predetermined image, and
the optical member is formed in a shape matching the shape of the original image.
A head-up display configured to display a predetermined image, comprising:
an image generation unit that emits light for generating the predetermined image; and
a mirror that reflects the light emitted by the image generation unit so that the light is applied to a transmissive member, wherein
the image generation unit has at least:
a plurality of light sources; and
a single optical member that transmits the light from each of the plurality of light sources and emits the light, and
the plurality of light sources are arranged at a pitch matching the shape of the mirror so that light emitted from the single optical member is diffused and incident on the mirror.
The image generation unit (PGU: Picture Generation Unit) 24 is configured to emit light for generating a predetermined image displayed toward the occupants of the vehicle 1. The image generation unit 24 can emit, for example, light for generating a changing image that changes according to the situation of the vehicle 1.
Next, the HUD 20A according to the first embodiment will be described with reference to FIGS. 6 and 7.
FIG. 6 is a horizontal cross-sectional view of the image generation unit 24A included in the HUD 20A. FIG. 7 is a schematic view of the image generation unit 24A as viewed from the front side (light emitting surface 110 side).
As shown in FIG. 6, the image generation unit 24A includes a light source substrate 120 on which a plurality of light sources 121 (in this example, seven light sources, the first light source 121A to the seventh light source 121G) are mounted, a lens 130 (an example of an optical member) arranged on the front side of the light sources 121, and a light emitting surface 110 arranged on the front side of the lens 130. The image generation unit 24A further includes a lens holder 140 arranged on the front side of the light source substrate 120, a heat sink 150 arranged on the rear side of the light source substrate 120, and a PGU housing 160.
The HUD 20B according to the second embodiment will be described with reference to FIGS. 8 and 9.
FIG. 8 is a schematic view of the image generation unit 24B included in the HUD 20B as viewed from above. As shown in FIG. 8, the image generation unit 24B is also provided, like the image generation unit 24A of the first embodiment, with a plurality of light sources and a lens configured to correspond to these light sources.
Claims (8)
- A head-up display configured to display a predetermined image, comprising:
an image generation unit that emits light for generating the predetermined image; and
a mirror that reflects the light emitted by the image generation unit so that the light is applied to a transmissive member, wherein
the image generation unit has:
a light source;
an optical member that transmits light from the light source; and
a liquid crystal unit on which an original image for forming the predetermined image is generated by light emitted from the optical member,
the original image is formed in a shape corresponding to distortion of the predetermined image, and
the optical member is formed in a shape matching the shape of the original image.
- The head-up display according to claim 1, wherein the shape of the optical member is a curved shape.
- The head-up display according to claim 2, wherein the light source includes a plurality of light sources,
the optical member includes a plurality of convex surface portions that transmit light from each of the plurality of light sources, and
the plurality of light sources are arranged in a curved line when viewed from the liquid crystal unit side, and the plurality of convex surface portions are arranged in a curved line when viewed from the liquid crystal unit side.
- The head-up display according to claim 3, wherein the predetermined image is formed in a horizontally long rectangular shape, a degree of distortion of an end region of the predetermined image is larger than a degree of distortion of a central region of the predetermined image, and
the shape of the convex surface portion arranged corresponding to the central region differs from the shape of the convex surface portion arranged corresponding to the end region according to the difference between the degree of distortion of the central region and the degree of distortion of the end region.
- A head-up display configured to display a predetermined image, comprising:
an image generation unit that emits light for generating the predetermined image; and
a mirror that reflects the light emitted by the image generation unit so that the light is applied to a transmissive member, wherein
the image generation unit has at least:
a plurality of light sources; and
a single optical member that transmits the light from each of the plurality of light sources and emits the light, and
the plurality of light sources are arranged at a pitch matching the shape of the mirror so that light emitted from the single optical member is diffused and incident on the mirror.
- The head-up display according to claim 5, wherein the single optical member has a plurality of convex surface portions arranged in parallel along the parallel direction of the plurality of light sources so as to emit the light from each of the plurality of light sources, and
the plurality of light sources are arranged so that the pitch of the plurality of light sources is shorter than the pitch of the vertices of the plurality of convex surface portions.
- The head-up display according to claim 6, wherein the light emitted from each of the plurality of convex surface portions is configured to be incident perpendicularly on the mirror.
- The head-up display according to claim 6 or 7, wherein the predetermined image is formed in a horizontally long rectangular shape, and
among the plurality of convex surface portions arranged in parallel, the shape of the convex surface portion that emits light for forming the central region of the predetermined image differs from the shape of the convex surface portion that emits light for forming the end region of the predetermined image.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180082909.4A CN116601035A (zh) | 2020-12-09 | 2021-11-17 | Head-up display |
US18/266,425 US20240036311A1 (en) | 2020-12-09 | 2021-11-17 | Head-up display |
EP21903132.5A EP4261593A1 (en) | 2020-12-09 | 2021-11-17 | Head-up display |
JP2022568142A JPWO2022124028A1 (ja) | 2020-12-09 | 2021-11-17 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020-204214 | 2020-12-09 | ||
JP2020204214 | 2020-12-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022124028A1 true WO2022124028A1 (ja) | 2022-06-16 |
Family
ID=81974389
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/042271 WO2022124028A1 (ja) | 2020-12-09 | 2021-11-17 | Head-up display |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240036311A1 (ja) |
EP (1) | EP4261593A1 (ja) |
JP (1) | JPWO2022124028A1 (ja) |
CN (1) | CN116601035A (ja) |
WO (1) | WO2022124028A1 (ja) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017151404A (ja) * | 2016-02-23 | 2017-08-31 | Denso Corporation | Head-up display device |
JP2018045103A (ja) | 2016-09-14 | 2018-03-22 | Panasonic Intellectual Property Management Co., Ltd. | Display device |
JP2020098270A (ja) * | 2018-12-18 | 2020-06-25 | Denso Corporation | Virtual image display device |
WO2020233529A1 (zh) * | 2019-05-17 | 2020-11-26 | 未来(北京)黑科技有限公司 | Head-up display system, active light-emitting image source, head-up display, and motor vehicle |
JP2020204214A (ja) | 2019-06-18 | 2020-12-24 | Takenaka Corporation | Method for treating a mudstone aggregate, method for preventing slaking, and method for preventing heavy-metal leaching from surplus soil |
-
2021
- 2021-11-17 JP JP2022568142A patent/JPWO2022124028A1/ja active Pending
- 2021-11-17 US US18/266,425 patent/US20240036311A1/en active Pending
- 2021-11-17 CN CN202180082909.4A patent/CN116601035A/zh active Pending
- 2021-11-17 WO PCT/JP2021/042271 patent/WO2022124028A1/ja active Application Filing
- 2021-11-17 EP EP21903132.5A patent/EP4261593A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
JPWO2022124028A1 (ja) | 2022-06-16 |
EP4261593A1 (en) | 2023-10-18 |
US20240036311A1 (en) | 2024-02-01 |
CN116601035A (zh) | 2023-08-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017072841A1 (ja) | Information display device |
WO2020110580A1 (ja) | Head-up display, vehicle display system, and vehicle display method |
WO2021065617A1 (ja) | Vehicle display system and vehicle |
US11597316B2 (en) | Vehicle display system and vehicle | |
US20180180879A1 (en) | Information display device and vehicle apparatus | |
US20220365345A1 (en) | Head-up display and picture display system | |
JP2023175794A (ja) | Head-up display |
US20210347259A1 (en) | Vehicle display system and vehicle | |
WO2021015171A1 (ja) | Head-up display |
WO2022124028A1 (ja) | Head-up display |
WO2021220955A1 (ja) | Image generation device and head-up display |
US20240069335A1 (en) | Head-up display | |
JP2020149063A (ja) | Head-up display device |
WO2021220722A1 (ja) | Image generation device and head-up display |
WO2022009605A1 (ja) | Image generation device and head-up display |
WO2023190338A1 (ja) | Image irradiation device |
US20220197025A1 (en) | Vehicular head-up display and light source unit used therefor | |
WO2023085230A1 (ja) | Image generation device and head-up display |
WO2021065438A1 (ja) | Head-up display |
WO2023216670A1 (zh) | Stereoscopic display device and vehicle |
WO2023184276A1 (zh) | Display method, display system, and terminal device |
CN117369127A (zh) | Virtual image display device, image data generation method, device, and related apparatus |
CN118033971A (en) | Projection system, projection method and vehicle | |
CN116206083A (zh) | 显示方法、显示设备、交通工具、存储介质及电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21903132 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022568142 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202180082909.4 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18266425 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2021903132 Country of ref document: EP Effective date: 20230710 |