WO2018168418A1 - Display device, display method, and program - Google Patents

Display device, display method, and program

Info

Publication number
WO2018168418A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual image
display
landscape
displaying
displayed
Prior art date
Application number
PCT/JP2018/007001
Other languages
French (fr)
Japanese (ja)
Inventor
圭介 岩脇
Original Assignee
パイオニア株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社 filed Critical パイオニア株式会社
Priority to JP2019505833A priority Critical patent/JPWO2018168418A1/en
Publication of WO2018168418A1 publication Critical patent/WO2018168418A1/en

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60K - ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 - Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators

Definitions

  • the present invention relates to a display device, a display method, and a program.
  • The present invention has been made in view of the above points, and an object of the present invention is to provide a display device capable of performing virtual image display with reduced discomfort even when an obstacle exists in the landscape.
  • The invention according to claim 1 is a display device comprising: a display unit that displays a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; and an object position acquisition unit that acquires the position of an object in the landscape, wherein the display unit stops displaying one currently displayed virtual image when the position of the object acquired by the object position acquisition unit overlaps the one virtual image and is located in front of the one virtual image.
  • The invention according to claim 5 is a display method executed by a display device capable of displaying a virtual image, the method comprising: a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; an object position acquisition step of acquiring the position of an object in the landscape; and a stop step of stopping the display of one currently displayed virtual image when the position of the object acquired in the object position acquisition step overlaps the one virtual image and is located in front of the one virtual image.
  • The invention according to claim 6 is a program that causes a computer of a display device capable of displaying a virtual image to execute: a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; an object position acquisition step of acquiring the position of an object in the landscape; and a stop step of stopping the display of one currently displayed virtual image when the position of the object acquired in the object position acquisition step overlaps the one virtual image and is located in front of the one virtual image.
  • The invention according to claim 7 is a display device comprising: a display unit that displays a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; and an object position acquisition unit that acquires the position of an object in the landscape, wherein, when the position of the object acquired by the object position acquisition unit overlaps a part of one currently displayed virtual image in the depth direction in the landscape, the display unit displays the portion of the one virtual image located behind the surface of the object in a display mode less conspicuous than the portion located in front of the surface of the object, or stops displaying the portion of the one virtual image located behind the surface of the object.
  • The invention according to claim 8 is a display method executed by a display device capable of displaying a virtual image, the method comprising: a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; an object position acquisition step of acquiring the position of an object in the landscape; and a display changing step of, when the position of the object acquired in the object position acquisition step overlaps a part of one currently displayed virtual image in the depth direction in the landscape, displaying the portion of the one virtual image located behind the surface of the object in a display mode less conspicuous than the portion located in front of the surface of the object, or stopping the display of the portion of the one virtual image located behind the surface of the object.
  • The invention according to claim 9 is a program that causes a computer of a display device capable of displaying a virtual image to execute: a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; an object position acquisition step of acquiring the position of an object in the landscape; and a display changing step of, when the position of the object acquired in the object position acquisition step overlaps a part of one currently displayed virtual image in the depth direction in the landscape, displaying the portion of the one virtual image located behind the surface of the object in a display mode less conspicuous than the portion located in front of the surface of the object, or stopping the display of the portion of the one virtual image located behind the surface of the object.
  • FIG. 6 is a flowchart illustrating the virtual image display processing routine according to Embodiment 2.
  • FIG. 7 is a diagram illustrating an example of the display position of a virtual image in Embodiment 2.
  • FIG. 8 is a block diagram illustrating the configuration of a display device according to a modification of Embodiment 2.
  • FIG. 9 is a diagram illustrating an example of the display position of a virtual image in the modification of Embodiment 2.
  • FIG. 1A is a diagram schematically showing the configuration of the display device 10 of this embodiment.
  • the display device 10 is a display device that displays a virtual image superimposed on a landscape.
  • the display device 10 is configured as a head-up display mounted on a moving body such as a vehicle.
  • a vehicle equipped with the display device 10 is referred to as “own vehicle”.
  • the display area of the virtual image that the display device 10 displays in the scenery in front of the host vehicle is referred to as display areas V1 to V3 (shown by broken lines in the figure. These are collectively referred to as a virtual image display area IV).
  • The eye position of an observer who observes the virtual image (for example, the driver of the own vehicle) is defined as the viewpoint EY.
  • The direction along the line of sight of an observer who observes a virtual image is the z-axis direction, and the directions perpendicular to the z-axis direction and to each other are the x-axis direction and the y-axis direction.
  • the x-axis direction is also referred to as the horizontal direction, the y-axis direction as the height direction, and the z-axis direction as the depth direction.
  • the direction from the viewpoint EY in the z-axis direction toward the virtual image display area IV is also referred to as the front.
  • the display device 10 includes a light source 20, a screen group 30, a reflection member 40, an image data generation unit 50, and a drive unit 60.
  • the light source 20 is a light source that irradiates light for displaying a virtual image in the virtual image display region IV, and includes, for example, a scanning laser projector that irradiates laser light.
  • the light source 20 has an emission part EP as an emission point of the laser light, and irradiates a predetermined irradiation area with the emission light L1 emitted from the emission part EP.
  • the screen group 30 includes screens S1, S2, and S3.
  • Each of the screens S1 to S3 has a flat plate shape, and is a state variable type screen that changes the state between a transmission state in which the outgoing light L1 from the light source 20 is transmitted and a scattering state in which the outgoing light L1 is scattered. Since the outgoing light L1 is scattered in the scattering state, the screen is in the display state. On the other hand, since the outgoing light L1 is allowed to pass through without being scattered in the transmissive state, the screen is not displayed.
  • the screens S1 to S3 are composed of, for example, a liquid crystal film in which a liquid crystal layer containing liquid crystal molecules and an electrode layer for switching the state of liquid crystal molecules in the liquid crystal layer are laminated, a translucent plate containing microlenses, and the like.
  • the screens S1 to S3 are all disposed in the irradiation region of the emitted light L1, and have a projection region (not shown) that projects a virtual image upon receiving the irradiation of the emitted light L1.
  • the screens S1 to S3 are arranged such that the distance from the emission part EP of the light source 20 to the projection area is different from each other.
  • the reflecting member 40 is a reflecting member that reflects the projection light L2 that is the irradiated light irradiated on the projection areas of the screens S1 to S3, and is disposed on a straight line that passes through all of the screens S1 to S3.
  • the reflecting member 40 is composed of, for example, an image combiner having translucency with respect to visible light.
  • the reflection member 40 has a concave surface portion 41 arranged to face the projection areas of the screens S1 to S3.
  • the concave surface portion 41 functions as a concave mirror for the projection light L2.
  • the outgoing light L1 emitted from the light source 20 is incident on the screens S1 to S3.
  • the light incident on the screens S1 to S3 is emitted as projection light L2 from the surface opposite to the irradiation surface of the emitted light L1 in a direction perpendicular to the opposite surface of each screen. Then, the projection light L2 travels toward the reflecting member 40.
  • When the observer observes the reflecting member 40 from the viewpoint EY, the observer can visually recognize virtual images in the virtual image display area IV located on the back side of the reflecting member 40 (forward in the z-axis direction). Specifically, the observer visually recognizes, through the reflecting member 40, the virtual images displayed in the display areas V1, V2, and V3, whose distances in the depth direction differ according to the distances between the screens S1, S2, and S3 and the reflecting member 40, respectively.
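  • As an illustration of why screens at different distances from the reflecting member yield virtual images at different depths, the sketch below applies the thin-mirror equation to the concave surface portion 41. The focal length and screen distances are hypothetical values chosen only for this example; the patent does not specify any optical parameters.

```python
def virtual_image_distance(focal_length_m: float, screen_distance_m: float) -> float:
    """Thin-mirror relation 1/f = 1/d_o + 1/d_i for a concave mirror.

    When the projecting screen (the object) lies inside the focal length,
    d_i is negative, i.e. an upright, magnified virtual image is formed
    behind the mirror. The function returns that distance as a positive value.
    """
    d_i = 1.0 / (1.0 / focal_length_m - 1.0 / screen_distance_m)
    if d_i > 0:
        raise ValueError("screen lies outside the focal length; a real image would form")
    return -d_i

# Hypothetical numbers: a mirror with a 0.30 m focal length and three screens
# placed at slightly different distances, as sketched in FIG. 1(a).
for name, d_screen in [("deepest area", 0.29), ("middle area", 0.28), ("nearest area", 0.25)]:
    print(name, round(virtual_image_distance(0.30, d_screen), 2), "m behind the mirror")
```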
  • the image data generation unit 50 generates image data VD to be projected on the screens S1 to S3 in order to display a virtual image in the virtual image display area IV.
  • the image data generation unit 50 generates image data VD based on the peripheral information indicating the surrounding information of the vehicle and the map information, and supplies the image data VD to the drive unit 60 and the control unit 70.
  • the drive unit 60 is a drive unit that drives the light source 20 and the screen group 30.
  • the drive unit 60 generates a light source drive signal DS1 for driving the light source 20 based on the image data VD generated by the image data generation unit 50, and supplies the light source drive signal DS1 to the light source 20.
  • the light source 20 generates emission light L1 corresponding to the image data VD according to the light source drive signal DS1, and irradiates the irradiation area.
  • the driving unit 60 generates a screen driving signal DS2 according to control by the control unit 70 and supplies the screen driving signal DS2 to the screen group 30.
  • Each of the screens S1 to S3 of the screen group 30 is switched between a transmission state and a scattering state by the screen drive signal DS2.
  • the control unit 70 determines the display position of the virtual image based on the image data VD, and controls the driving of the screens S1 to S3 by the driving unit 60 so that the virtual image is displayed at the determined display position.
  • The control unit 70 generates a control signal CS for controlling the driving of the screens S1 to S3 by the driving unit 60 and supplies the control signal CS to the driving unit 60.
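  • A minimal sketch of the screen switching described above: for the display area chosen by the control unit 70, exactly one screen is put into the scattering (display) state and the remaining screens into the transmission state. The enum, the mapping, and the function name are illustrative assumptions, not names from the patent.

```python
from enum import Enum

class ScreenState(Enum):
    TRANSMISSION = 0  # emitted light L1 passes through; nothing is displayed
    SCATTERING = 1    # emitted light L1 is scattered; the screen shows the image

# Illustrative one-to-one mapping of display areas to screens (V1 -> S1, etc.).
AREA_TO_SCREEN = {"V1": "S1", "V2": "S2", "V3": "S3"}

def screen_drive_states(display_area: str) -> dict:
    """Per-screen states that the screen drive signal DS2 should realize."""
    target = AREA_TO_SCREEN[display_area]
    return {
        screen: ScreenState.SCATTERING if screen == target else ScreenState.TRANSMISSION
        for screen in AREA_TO_SCREEN.values()
    }

# Displaying in V2 means S2 scatters while S1 and S3 stay transparent.
print(screen_drive_states("V2"))
```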
  • FIG. 1B is a block diagram illustrating a configuration of the control unit 70.
  • the control unit 70 includes a host vehicle position detection unit 11, a front detection unit 12, a relative position calculation unit 13, a virtual image position setting unit 14, and a storage unit 15.
  • the own vehicle position acquisition unit 11 is composed of, for example, a GPS (Global Positioning System) device, receives radio waves transmitted from a plurality of GPS satellites, and calculates a distance from each GPS satellite based on the received radio waves. Get the location information of the vehicle.
  • the front detection unit 12 detects an object existing within a predetermined distance in front of the host vehicle and its position.
  • the front detection unit 12 is constituted by a radar device such as a pulse radar, for example, emits radio waves in front of the host vehicle, detects an object, and measures the distance and direction to the object.
  • Note that the front detection unit 12 is not limited to a configuration using a radar.
  • the front detection unit 12 may include a camera that captures the front of the host vehicle, and may detect the presence and position of an object based on image recognition.
  • Based on the position information of the own vehicle acquired by the own vehicle position acquisition unit 11, map information including the area around the own vehicle, and the information on objects detected by the front detection unit 12, the relative position calculation unit 13 calculates the relative position between the own vehicle and the guidance target or the alerting target of the virtual image display (hereinafter simply referred to as the target object).
  • In addition to the virtual image display target, the relative position calculation unit 13 also calculates the relative position between the own vehicle and other objects (for example, a forward vehicle) existing in front of the own vehicle. That is, the relative position calculation unit 13 acquires the positions, relative to the own vehicle, of objects in the forward landscape.
  • the virtual image position setting unit 14 sets the display position of the virtual image based on the image content indicated by the image data VD and the relative position between the vehicle and the object.
  • the virtual image position setting unit 14 sets which of the display areas V1 to V3 is the virtual image display position as the display position in the z-axis direction (that is, the depth direction). Further, the virtual image position setting unit 14 sets display positions of virtual images in the x-axis direction and the y-axis direction that are plane directions in each display region.
  • The virtual image position setting unit 14 sets a position close to the guidance object or the alerting object as the display position of the virtual image. However, when another object (an obstacle such as a forward vehicle) is detected at a position that overlaps that display position and is located in front of it, the virtual image position setting unit 14 sets a position in front of the other object as the virtual image display position. A sketch of this rule is shown below.
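  • In the sketch below, the display depth normally follows the guidance or alerting target, but when an obstacle overlaps that position and lies in front of it, the deepest display area still in front of the obstacle is chosen instead. The 7 m / 4 m / 2 m depths follow the example given later for FIGS. 4 and 5; everything else is an assumption for illustration.

```python
# Depths (in metres ahead of the own vehicle) of the display areas V1 to V3;
# the 7 m / 4 m / 2 m values follow the example used for FIGS. 4 and 5.
DISPLAY_DEPTHS_M = {"V1": 7.0, "V2": 4.0, "V3": 2.0}

def choose_display_area(target_depth_m, obstacle_front_depth_m=None):
    """Pick the display area for a virtual image.

    target_depth_m: distance to the guidance or alerting target.
    obstacle_front_depth_m: distance to the front surface of an obstacle that
        overlaps the virtual image position, or None if there is no such obstacle.
    """
    # Default: the display area closest to the target object.
    area = min(DISPLAY_DEPTHS_M, key=lambda a: abs(DISPLAY_DEPTHS_M[a] - target_depth_m))
    if obstacle_front_depth_m is not None and DISPLAY_DEPTHS_M[area] >= obstacle_front_depth_m:
        # The virtual image would look embedded in the obstacle, so move it to
        # the deepest display area that is still in front of the obstacle.
        in_front = {a: d for a, d in DISPLAY_DEPTHS_M.items() if d < obstacle_front_depth_m}
        area = max(in_front, key=in_front.get)
    return area

print(choose_display_area(8.0))                              # -> 'V1'
print(choose_display_area(8.0, obstacle_front_depth_m=5.0))  # -> 'V2'
```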
  • the storage unit 15 is an information holding unit that appropriately stores data necessary for the processing of the control unit 70 and data generated in the processing.
  • the storage unit 15 includes a storage device such as a hard disk, flash memory, SSD (Solid State Drive), and RAM (Random Access Memory).
  • The display device 10 sets or resets the virtual image display position based on the position of an object (such as a forward vehicle) located in front of the own vehicle, both (1) when a new virtual image is to be displayed and (2) when the situation ahead of the host vehicle changes while a virtual image is already displayed.
  • the own vehicle position acquisition unit 11 acquires position information indicating the current position of the own vehicle (step S101).
  • The relative position calculation unit 13 refers to the map information around the host vehicle read from the storage unit 15 (step S102), and calculates the relative position between the host vehicle and the guidance object (for example, an intersection or a store) (step S103).
  • the virtual image position setting unit 14 calculates the display position of the virtual image such that the guidance object and the virtual image overlap in the landscape based on the relative position calculated by the relative position calculation unit 13 (step S104).
  • the front detection unit 12 detects an obstacle (for example, a front vehicle) within a predetermined distance range in front of the host vehicle (step S105).
  • The relative position calculation unit 13 determines whether there is an obstacle such as a forward vehicle within the range (step S106). If it is determined that there is no obstacle (step S106: No), the process proceeds to step S109.
  • If it is determined that there is an obstacle (step S106: Yes), the relative position calculation unit 13 calculates the relative position between the detected obstacle and the host vehicle (step S107).
  • the virtual image position setting unit 14 determines whether the detected obstacle overlaps the virtual image display position calculated in step S104 and is positioned in front of the virtual image display position (step S108).
  • If it is determined that there is no obstacle at a position that overlaps the display position of the virtual image and is in front of it (step S108: No), the virtual image position setting unit 14 determines the display position calculated in step S104 as the display position of the virtual image (step S109).
  • If it is determined that there is an obstacle at a position that overlaps the display position of the virtual image and is in front of it (step S108: Yes), the virtual image position setting unit 14 changes the display position of the virtual image calculated in step S104 to a position in front of the obstacle, and determines the changed position as the virtual image display position (step S110). The flow of steps S101 to S110 is sketched below.
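  • The control flow of steps S101 to S110 can be summarized in a single function. The data sources (GPS, map, radar) are reduced to plain parameters, and all names, as well as the 0.5 m margin, are illustrative assumptions; only the branching mirrors the flowchart.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Obstacle:
    depth_m: float    # distance from the own vehicle to the obstacle's front surface
    lateral_m: float  # lateral offset, used by the overlap test

def determine_display_position(target_relative_depth_m: float,        # S101-S103
                               obstacle: Optional[Obstacle],          # S105-S107
                               overlaps: Callable[[Obstacle], bool]) -> float:
    """Return the virtual image display depth in metres (steps S104 and S106-S110)."""
    # S104: provisional position so that the virtual image overlaps the guidance target.
    display_depth_m = target_relative_depth_m
    if obstacle is None:                                              # S106: nothing ahead
        return display_depth_m                                        # S109
    if overlaps(obstacle) and obstacle.depth_m <= display_depth_m:    # S108
        # S110: move the display position to just in front of the obstacle.
        display_depth_m = obstacle.depth_m - 0.5                      # 0.5 m margin is arbitrary
    return display_depth_m

# Guidance target (intersection) 7 m ahead, forward vehicle whose rear is 5 m ahead:
print(determine_display_position(7.0, Obstacle(5.0, 0.0), overlaps=lambda o: True))  # 4.5
```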
  • the display device 10 of the present embodiment sets the display position of the virtual image at a position in front of the obstacle such as the vehicle ahead and displays the virtual image.
  • the own vehicle position acquisition unit 11 acquires position information indicating the current position of the own vehicle (step S201).
  • the front detection unit 12 detects an obstacle (for example, a front vehicle) within a predetermined distance in front of the host vehicle (step S202).
  • The relative position calculation unit 13 determines whether there is an obstacle such as a forward vehicle within the range (step S203). If it is determined that there is no obstacle (step S203: No), the process proceeds to step S206.
  • If it is determined that there is an obstacle (step S203: Yes), the relative position calculation unit 13 calculates the relative position between the detected obstacle and the host vehicle (step S204).
  • the virtual image position setting unit 14 determines whether or not the detected obstacle overlaps with the display position of the currently displayed virtual image and is positioned in front of the display position of the virtual image (step S205).
  • If it is determined that there is no obstacle at a position that overlaps the display position of the virtual image and is in front of it (step S205: No), the display device 10 continues to display the virtual image at the currently displayed position (step S206).
  • If it is determined in step S205 that there is an obstacle at a position that overlaps the display position of the virtual image and is in front of it (step S205: Yes), the display device 10 stops displaying the virtual image at the currently displayed position, and the virtual image position setting unit 14 resets (changes) the display position of the virtual image so that it is positioned in front of the obstacle detected in step S202 (step S207).
  • the display device 10 displays a virtual image at the reset display position (step S208).
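  • The re-evaluation of steps S201 to S208 differs from the routine above mainly in that it starts from a virtual image that is already on display: if an obstacle has come to overlap it from the front, the current display is stopped and the image is shown again closer to the viewer. A compact sketch under the same assumptions (the drive-unit calls are stand-ins):

```python
def stop_display_at(depth_m: float) -> None:
    print(f"stop displaying the virtual image at {depth_m} m")   # stand-in for the drive unit

def display_at(depth_m: float) -> None:
    print(f"display the virtual image at {depth_m} m")           # stand-in for the drive unit

def update_displayed_virtual_image(current_depth_m: float,
                                   obstacle_depth_m,
                                   overlaps: bool) -> float:
    """Steps S203 to S208: return the (possibly changed) display depth."""
    if obstacle_depth_m is None or not overlaps or obstacle_depth_m > current_depth_m:
        return current_depth_m                 # S206: keep the current display position
    stop_display_at(current_depth_m)           # the display at the current position stops
    new_depth_m = obstacle_depth_m - 0.5       # S207: reset to a position in front (0.5 m margin assumed)
    display_at(new_depth_m)                    # S208: display at the reset position
    return new_depth_m

print(update_displayed_virtual_image(7.0, 5.0, overlaps=True))   # -> 4.5
```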
  • the display device 10 of the present embodiment changes the display position of the virtual image in the depth direction so that the display position of the virtual image is in front of the obstacle such as the forward vehicle.
  • the display position is changed by switching a screen on which a virtual image is projected between S1, S2, and S3.
  • FIGS. 4 and 5 show examples of virtual image display when a forward vehicle is present as an obstacle at a position overlapping the virtual image display position, comparing the state before the change of the virtual image display position (FIG. 4) with the state after the change (FIG. 5).
  • FIGS. 4(a) and 5(a) schematically show the positions of the host vehicle and the forward vehicle, and the display position of the virtual image, as viewed from above along the y-axis direction (that is, the height direction).
  • FIGS. 4(b) and 5(b) show the forward vehicle and the display position of the virtual image as viewed along the z-axis direction (that is, the depth direction), which is the direction seen from an observer such as the driver of the own vehicle.
  • In these figures, the position at a distance of 7 m from the front of the host vehicle is the display area V1, the position at 4 m is the display area V2, and the position at 2 m is the display area V3, and a forward vehicle FV is present ahead of the host vehicle.
  • the display device 10 displays an arrow for guiding the left turn as a virtual image.
  • the virtual image position setting unit 14 calculates a position close to the intersection that is the object of the left turn as the display position of the virtual image. Accordingly, the display area V1 (7 m position) is selected as the display position of the virtual image.
  • In this case, the display position of the virtual image overlaps the vehicle body position of the forward vehicle FV. Therefore, as shown in FIG. 4(b), when viewed from the observer (for example, the driver of the own vehicle OV), the virtual image appears to be embedded in the vehicle body of the forward vehicle FV.
  • After the change of the display position, the display area V2 (the 4 m position), which is located in front of the rearmost part of the forward vehicle FV, is selected as the display position of the virtual image.
  • As a result, as shown in FIG. 5(b), the virtual image is not displayed at a position where it appears recessed into the vehicle body of the preceding vehicle, but at a position in front of it that does not overlap the vehicle body of the preceding vehicle.
  • As described above, when the display device 10 according to the present embodiment detects an object that overlaps a displayed virtual image in the landscape and is positioned in front of that virtual image, the display device 10 stops displaying the virtual image at its current position and displays it at a position in front of the object.
  • Further, when the display device 10 of the present embodiment detects an object that overlaps the virtual image display position set according to the display content of the virtual image and is positioned in front of that display position, it resets a position in front of the object as the virtual image display position.
  • The display device 10 performs these processes for every guidance target or alerting target (for example, a store or an intersection). Therefore, for example, when the guidance target or the alerting target changes as the host vehicle moves, these processes are repeated.
  • Since the virtual image is displayed in front of the obstacle such as the forward vehicle, the observer (for example, the driver of the own vehicle) can visually recognize the virtual image without the discomfort of the virtual image and the obstacle appearing to overlap each other. Therefore, according to the display device 10 of the present embodiment, it is possible to display a virtual image without a sense of incongruity even when an obstacle exists in the landscape.
  • the display device of Example 2 will be described.
  • the display device of the present embodiment has the same device configuration as that of the display device 10 of the first embodiment shown in FIG. That is, the display device 10 according to the present embodiment includes the light source 20, the screen group 30, the reflection member 40, the image data generation unit 50, the drive unit 60, and the control unit 70.
  • control unit 70 of the display device 10 of the present embodiment has the same configuration as the control unit 70 of the first embodiment shown in FIG. That is, the control unit 70 of the present embodiment includes a host vehicle position detection unit 11, a front detection unit 12, a relative position calculation unit 13, a virtual image position setting unit 14, and a storage unit 15.
  • the front detector 12 is composed of a measuring device capable of measuring a three-dimensional distance with high accuracy.
  • the front detection unit 12 of the present embodiment is configured by a LiDAR (Light Detection and Ranging) device that measures the distance to the object by irradiating the object with laser light.
  • In the present embodiment, when the display position of the virtual image overlaps the position of an obstacle such as a forward vehicle and part of the virtual image is located behind the surface of the obstacle, the display mode of that part of the virtual image is changed.
  • Specifically, instead of the virtual image position setting unit 14 changing the display position of the entire virtual image as in Embodiment 1, the image data generation unit 50 generates new image data VD that changes the display mode of the portion of the virtual image, overlapping the obstacle, that is located behind the obstacle.
  • The image data generation unit 50 generates the image data VD such that the part displayed behind the obstacle is rendered in a display mode (referred to as the non-standard mode) that is less conspicuous than the display mode of the part displayed in front of the obstacle (referred to as the standard mode).
  • An inconspicuous display mode is, for example, a display mode with lower brightness than the standard mode, or a display mode in which the drawn area of the image is reduced by displaying only the outline of the virtual image with a broken line or the like.
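  • As a concrete illustration of the two display modes, the sketch below dims the part of an RGB virtual-image bitmap that lies behind the obstacle surface, which is one possible realization of the lower-brightness non-standard mode; a broken-outline rendering would instead keep only dashed edge pixels. The 30 % brightness factor and the row-based occlusion are assumptions made only for this example.

```python
def apply_non_standard_mode(image, occluded_rows, brightness=0.3):
    """Return a copy of the virtual-image bitmap in which every row that lies
    behind the obstacle surface is rendered at reduced brightness (non-standard
    mode); all other rows keep the standard mode unchanged."""
    out = []
    for y, row in enumerate(image):
        if y in occluded_rows:
            out.append([(int(r * brightness), int(g * brightness), int(b * brightness))
                        for (r, g, b) in row])
        else:
            out.append(list(row))
    return out

# A 2x3 yellow arrow fragment: dim the bottom row (the part behind the forward vehicle).
img = [[(255, 255, 0)] * 3, [(255, 255, 0)] * 3]
print(apply_non_standard_mode(img, occluded_rows={1}))
```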
  • As a result, for the observer (for example, the driver of the own vehicle OV), even when the virtual image appears to partially overlap the obstacle, the sense of incongruity caused by part of the virtual image appearing to be embedded in the obstacle is alleviated.
  • the own vehicle position acquisition unit 11 acquires position information indicating the current position of the own vehicle (step S101).
  • The relative position calculation unit 13 refers to the map information around the host vehicle read from the storage unit 15 (step S102), and calculates the relative position between the host vehicle and the guidance object (for example, an intersection or a store) (step S103).
  • the virtual image position setting unit 14 calculates the display position of the virtual image such that the guidance object and the virtual image overlap in the landscape based on the relative position calculated by the relative position calculation unit 13 (step S104).
  • the front detection unit 12 detects an obstacle (for example, a front vehicle) within a predetermined distance range in front of the host vehicle (step S105).
  • The relative position calculation unit 13 determines whether there is an obstacle such as a forward vehicle within the range (step S106). If it is determined that there is no obstacle (step S106: No), the process proceeds to step S303.
  • If it is determined that there is an obstacle (step S106: Yes), the relative position calculation unit 13 calculates the distance to each part of the obstacle (step S301).
  • Based on the distance to each part of the obstacle calculated by the relative position calculation unit 13, the virtual image display position in the depth direction calculated by the virtual image position setting unit 14 in step S104, and the image data VD being generated, the image data generation unit 50 determines whether the surface of the obstacle is positioned in front of a part of the display position of the virtual image (step S302).
  • If it is determined that the surface of the obstacle is not positioned in front of any part of the virtual image display position (step S302: No), the virtual image position setting unit 14 sets the display position calculated in step S104 as the virtual image display position.
  • The image data generation unit 50 generates image data VD for displaying the entire virtual image in the standard mode.
  • The drive unit 60 drives the light source 20 based on the image data VD. As a result, the virtual image is displayed in the standard mode, with the position close to the guidance object as its display position (step S303).
  • On the other hand, if it is determined that the surface of the obstacle is positioned in front of a part of the virtual image display position (step S302: Yes), the virtual image position setting unit 14 sets the display position calculated in step S104 as the virtual image display position.
  • The image data generation unit 50 generates image data VD that displays the portion of the virtual image located behind the surface of the obstacle (the surface on the side facing the own vehicle) in the non-standard mode and the portion located in front of that surface in the standard mode.
  • The drive unit 60 drives the light source 20 based on the image data VD.
  • As a result, the position close to the guidance object is set as the virtual image display position, the portion of the virtual image displayed in front of the surface of the obstacle is displayed in the standard mode, and the portion displayed behind the surface of the obstacle is displayed in the non-standard mode (step S304).
  • In this way, the display device 10 displays the portion of a virtual image overlapping the obstacle that lies behind the surface of the obstacle in a display mode (the non-standard mode) that is less conspicuous than the display mode of the portion lying in front of the surface of the obstacle; a sketch of this per-part decision follows.
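  • A sketch of the step S302 decision: using the per-direction distances measured by the front detection unit (LiDAR), each part of the virtual image is classified as lying in front of or behind the obstacle surface, and the image data is split into standard and non-standard portions accordingly (step S304). The row-based geometry and all names are simplifications assumed for this example.

```python
def classify_rows(virtual_image_depth_m: float,
                  obstacle_surface_depth_by_row: dict,
                  n_rows: int) -> dict:
    """Steps S301-S302: decide, row by row, which part of the virtual image lies
    behind the obstacle surface (non-standard mode) and which lies in front of it
    (standard mode). A row with no obstacle measurement counts as unobstructed."""
    standard, non_standard = [], []
    for row in range(n_rows):
        surface = obstacle_surface_depth_by_row.get(row)
        if surface is not None and surface < virtual_image_depth_m:
            non_standard.append(row)   # the obstacle surface is in front of this part
        else:
            standard.append(row)       # this part is in front of, or clear of, the obstacle
    return {"standard": standard, "non_standard": non_standard}

# Virtual image at 7 m; the forward vehicle's rear surface (5 m) covers the lower rows 3-5.
measurements = {3: 5.0, 4: 5.0, 5: 5.0}
print(classify_rows(7.0, measurements, n_rows=6))
# {'standard': [0, 1, 2], 'non_standard': [3, 4, 5]}
```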
  • FIGS. 7A and 7B are diagrams illustrating display examples of a virtual image when a forward vehicle as an obstacle is present at a position partially overlapping with the display position of the virtual image.
  • FIG. 7A schematically shows the positions of the host vehicle and the preceding vehicle and the display position of the virtual image viewed from the x-axis direction (that is, the lateral direction).
  • FIG. 7B shows the forward vehicle and the display position of the virtual image as viewed along the z-axis direction (that is, the depth direction), which is the direction seen from an observer such as the driver of the own vehicle.
  • In this example, the lower part of the virtual image is located behind the surface of the forward vehicle, so the display device 10 displays the lower part of the virtual image in the inconspicuous (non-standard) mode. Accordingly, as shown in FIG. 7B, when viewed from the observer (for example, the driver of the own vehicle OV), only the outline of the lower portion of the virtual image is displayed, for example with a broken line.
  • FIG. 8 is a diagram schematically showing the configuration of a display device 10A according to a modification of the present embodiment.
  • The display device 10A differs from the display device 10 of FIG. 1 in the configuration of the screen group 30.
  • the screen group 30 of the display device 10A includes screens S1A, S2A, and S3A.
  • the screens S1A, S2A, and S3A are configured as state-variable screens that change the state between a transmission state and a scattering state, like the screens S1 to S3 of the display device 10.
  • the screens S1A, S2A, and S3A are arranged in parallel to each other so that the normal line is inclined with respect to the optical axis of the laser light emitted from the light source 20 toward the reflecting member 40. Further, the distances from the upper ends of the screens S1A, S2A, and S3A to the reflecting member 40 are larger than the distances from the respective lower ends to the reflecting member 40.
  • Accordingly, the display areas V1A to V3A corresponding to the respective screens are inclined such that their upper ends are located on the back side (that is, forward in the z-axis direction); in other words, the observer visually recognizes a virtual image whose upper end side is inclined forward.
  • The display device 10A performs the virtual image display processing according to the same flowchart as in Embodiment 2, with the following differences.
  • For each virtual image display position in the depth direction that can be set by the virtual image position setting unit 14 in step S104, the image data generation unit 50 of this modification stores in advance the correspondence between the vertical position in the image data VD and the display position in the depth direction.
  • The image data generation unit 50 then makes the determination in step S302 based on the distance to each part of the obstacle calculated by the relative position calculation unit 13, the virtual image display position in the depth direction calculated by the virtual image position setting unit 14, and the stored correspondence between the vertical position in the image data VD and the display position in the depth direction.
  • In step S304, the portion of the virtual image on the near side of the surface of the obstacle is displayed in the standard mode and the portion on the far side is displayed in the non-standard mode, so that a virtual image whose upper end is inclined forward is displayed. A sketch of this row-by-row decision is given below.
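  • Because the image plane is tilted in this modification, each vertical position of the virtual image corresponds to a different depth. The sketch below stands in for the pre-stored correspondence with a simple linear interpolation between a bottom depth and a top depth, which is an assumption made only for illustration, and then applies the same in-front or behind decision per row.

```python
def row_depth_m(row: int, n_rows: int, bottom_depth_m: float, top_depth_m: float) -> float:
    """Depth of an image row for a virtual image whose upper end is inclined forward:
    row 0 is the top (deepest) and the last row is the bottom (nearest). Linear
    interpolation stands in for the correspondence table stored in advance."""
    t = row / (n_rows - 1)
    return top_depth_m + t * (bottom_depth_m - top_depth_m)

def tilted_image_modes(n_rows: int, bottom_depth_m: float, top_depth_m: float,
                       obstacle_surface_depth_m: float) -> list:
    """Steps S302/S304 for the tilted virtual image: rows deeper than the obstacle
    surface are shown in the non-standard mode, the remaining rows in the standard mode."""
    return ["non-standard"
            if row_depth_m(r, n_rows, bottom_depth_m, top_depth_m) > obstacle_surface_depth_m
            else "standard"
            for r in range(n_rows)]

# Virtual image tilted from 4 m (bottom) to 7 m (top); forward vehicle surface at 5 m:
print(tilted_image_modes(n_rows=5, bottom_depth_m=4.0, top_depth_m=7.0,
                         obstacle_surface_depth_m=5.0))
# ['non-standard', 'non-standard', 'non-standard', 'standard', 'standard']
```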
  • FIGS. 9A and 9B are diagrams illustrating a display example of a virtual image by the display device 10A of this modification when a forward vehicle is present as an obstacle at a position that partially overlaps the display position of the virtual image.
  • FIG. 9A schematically shows the positions of the host vehicle and the preceding vehicle and the display position of the virtual image viewed from the x-axis direction (that is, the lateral direction).
  • FIG. 9B shows the forward vehicle and the display position of the virtual image as viewed along the z-axis direction (that is, the depth direction), which is the direction seen from an observer such as the driver of the own vehicle.
  • In this modification, the virtual image is displayed with its upper end side tilted forward. Therefore, when the upper part of the virtual image lies behind the rearmost part of the forward vehicle FV as viewed from the own vehicle OV, that upper part is displayed in the inconspicuous (non-standard) mode.
  • As shown in FIG. 9B, when viewed from the observer (for example, the driver of the own vehicle OV), only the outline of the upper portion of the virtual image is displayed, for example with a broken line.
  • In the above examples, the portion of the virtual image located behind the surface of the obstacle is displayed in an inconspicuous display mode (the non-standard mode), but that portion may instead not be displayed at all (that is, the display of that portion is stopped).
  • In short, it suffices that the display mode of the portion of the virtual image displayed behind the surface of the obstacle differs from that of the portion displayed in front of it, so as not to hinder the driver's forward visual recognition.
  • As described above, in the display device of the present embodiment, the portion of the virtual image that is displayed behind an obstacle such as the forward vehicle is displayed in an inconspicuous manner or is not displayed, so that the observer (for example, the driver of the own vehicle OV) is less likely to feel discomfort.
  • According to the display device of the present embodiment, it is therefore possible to perform virtual image display with reduced discomfort even when obstacles exist in the landscape.
  • the embodiments of the present invention are not limited to those shown in the above examples.
  • For example, in Embodiment 1, when the display position of the virtual image is changed to a position in front of the obstacle, the virtual image displayed after the change (the AR display) need not be the same as the virtual image before the change; a virtual image whose display form has been changed may be displayed instead. The position of the virtual image in the plane directions (the x-axis direction or the y-axis direction) may also be changed.
  • the display device 10 has the screens S1 to S3 and displays virtual images in the display areas V1 to V3 having different distances in the depth direction.
  • the number of screens and display areas is not limited to this, and it is possible to have two or four or more screens and display a virtual image in two or four or more display areas.
  • A configuration that displays a virtual image by multi-view display or by three-dimensional display using holography may also be used.
  • In the above embodiments, the case has been described where the display position of the virtual image in the depth direction is changed by switching the screen on which the virtual image is projected among the screens S1 to S3.
  • the display position of the virtual image may be changed by moving the screen position.
  • Reference signs: 10 display device; 11 own vehicle position acquisition unit; 12 front detection unit; 13 relative position calculation unit; 14 virtual image position setting unit; 15 storage unit; 20 light source; 30 screen group; S1, S2, S3 screens; 40 reflecting member; 50 image data generation unit; 60 drive unit; 70 control unit; 10A display device; S1A, S2A, S3A screens

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computer Hardware Design (AREA)
  • Automation & Control Theory (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Devices For Indicating Variable Information By Combining Individual Elements (AREA)
  • Navigation (AREA)
  • Instrument Panels (AREA)

Abstract

A display device includes a display unit that presents a virtual image at at least one position of a plurality of positions having mutually different depths in a scene, and an object position acquisition unit for acquiring the position of an object in the scene. The display device stops presenting one of the virtual images when the position of the object acquired by the object position acquisition unit overlaps the virtual image currently presented and is closer than the virtual image, or presents the virtual image in a manner that alleviates an observer's discomfort.

Description

Display device, display method, and program
 The present invention relates to a display device, a display method, and a program.
 In recent years, guidance display using AR (Augmented Reality) display, which superimposes a virtual image on the scenery ahead, has been performed in moving bodies such as automobiles. For example, in such guidance display, a virtual image representing guidance to a store, a right or left turn, or an alert about the presence of a passerby is displayed near the target of the guidance or the alert. As a display system for such guidance display, a display system has been proposed that detects an object in front of the vehicle and moves the virtual image according to the movement of the vehicle so that the virtual image is always displayed overlapping the object (for example, Patent Document 1).
JP 2016-185768 A
 In the above-described prior art, when the position of an obstacle such as a forward vehicle overlaps the display position of the virtual image, the virtual image appears to the driver to be embedded in the obstacle. One problem is therefore that the driver feels a sense of incongruity, because information becomes visible at a position where nothing should originally be visible.
 In addition, since the eyes focus on the position of the virtual image when the driver gazes at it, it can be difficult to grasp the position of the rear of a vehicle located in front of the virtual image; another problem is therefore that a virtual image intended for alerting can reduce its own effectiveness for safe driving support.
 The present invention has been made in view of the above points, and one of its objects is to provide a display device capable of performing virtual image display with reduced discomfort even when an obstacle exists in the landscape.
 The invention according to claim 1 is a display device comprising: a display unit that displays a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; and an object position acquisition unit that acquires the position of an object in the landscape, wherein the display unit stops displaying one currently displayed virtual image when the position of the object acquired by the object position acquisition unit overlaps the one virtual image and is located in front of the one virtual image.
 The invention according to claim 5 is a display method executed by a display device capable of displaying a virtual image, the method comprising: a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; an object position acquisition step of acquiring the position of an object in the landscape; and a stop step of stopping the display of one currently displayed virtual image when the position of the object acquired in the object position acquisition step overlaps the one virtual image and is located in front of the one virtual image.
 The invention according to claim 6 is a program that causes a computer of a display device capable of displaying a virtual image to execute: a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; an object position acquisition step of acquiring the position of an object in the landscape; and a stop step of stopping the display of one currently displayed virtual image when the position of the object acquired in the object position acquisition step overlaps the one virtual image and is located in front of the one virtual image.
 The invention according to claim 7 is a display device comprising: a display unit that displays a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; and an object position acquisition unit that acquires the position of an object in the landscape, wherein, when the position of the object acquired by the object position acquisition unit overlaps a part of one currently displayed virtual image in the depth direction in the landscape, the display unit displays the portion of the one virtual image located behind the surface of the object in a display mode less conspicuous than the portion located in front of the surface of the object, or stops displaying the portion of the one virtual image located behind the surface of the object.
 The invention according to claim 8 is a display method executed by a display device capable of displaying a virtual image, the method comprising: a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; an object position acquisition step of acquiring the position of an object in the landscape; and a display changing step of, when the position of the object acquired in the object position acquisition step overlaps a part of one currently displayed virtual image in the depth direction in the landscape, displaying the portion of the one virtual image located behind the surface of the object in a display mode less conspicuous than the portion located in front of the surface of the object, or stopping the display of the portion of the one virtual image located behind the surface of the object.
 The invention according to claim 9 is a program that causes a computer of a display device capable of displaying a virtual image to execute: a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; an object position acquisition step of acquiring the position of an object in the landscape; and a display changing step of, when the position of the object acquired in the object position acquisition step overlaps a part of one currently displayed virtual image in the depth direction in the landscape, displaying the portion of the one virtual image located behind the surface of the object in a display mode less conspicuous than the portion located in front of the surface of the object, or stopping the display of the portion of the one virtual image located behind the surface of the object.
 FIG. 1 is a block diagram showing the configuration of the display device of the present embodiment.
 FIG. 2 is a flowchart showing a routine of virtual image display position determination processing.
 FIG. 3 is a flowchart showing a routine of virtual image display position determination processing.
 FIG. 4 is a diagram showing an example of the display position of a virtual image before the display position is changed.
 FIG. 5 is a diagram showing an example of the display position of the virtual image after the display position is changed.
 FIG. 6 is a flowchart showing the virtual image display processing routine of Embodiment 2.
 FIG. 7 is a diagram showing an example of the display position of a virtual image in Embodiment 2.
 FIG. 8 is a block diagram showing the configuration of a display device according to a modification of Embodiment 2.
 FIG. 9 is a diagram showing an example of the display position of a virtual image in the modification of Embodiment 2.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the following description of the embodiments and in the accompanying drawings, substantially the same or equivalent parts are denoted by the same reference numerals.
 FIG. 1(a) is a diagram schematically showing the configuration of the display device 10 of this embodiment. The display device 10 is a display device that displays a virtual image superimposed on a landscape. In the present embodiment, the display device 10 is configured as a head-up display mounted on a moving body such as a vehicle. In the following description, the vehicle equipped with the display device 10 is referred to as the "own vehicle".
 In this embodiment, the areas in which the display device 10 displays virtual images in the scenery ahead of the own vehicle are referred to as display areas V1 to V3 (shown by broken lines in the figure; collectively referred to as the virtual image display area IV), and the eye position of an observer who observes the virtual images (for example, the driver of the own vehicle) is defined as the viewpoint EY. The direction along the line of sight of the observer is the z-axis direction, and the directions perpendicular to the z-axis direction and to each other are the x-axis direction and the y-axis direction. The x-axis direction is also referred to as the horizontal direction, the y-axis direction as the height direction, and the z-axis direction as the depth direction. The direction from the viewpoint EY toward the virtual image display area IV along the z-axis is also referred to as forward.
 The display device 10 includes a light source 20, a screen group 30, a reflecting member 40, an image data generation unit 50, and a drive unit 60.
 The light source 20 is a light source that emits light for displaying a virtual image in the virtual image display area IV, and is composed of, for example, a scanning laser projector that emits laser light. The light source 20 has an emission part EP as the emission point of the laser light, and irradiates a predetermined irradiation area with the emitted light L1 emitted from the emission part EP.
 The screen group 30 includes screens S1, S2, and S3. Each of the screens S1 to S3 has a flat plate shape and is a state-variable screen that switches between a transmission state, in which the emitted light L1 from the light source 20 passes through, and a scattering state, in which the emitted light L1 is scattered. In the scattering state the emitted light L1 is scattered, so the screen is in a display state; in the transmission state the emitted light L1 passes through without being scattered, so the screen is in a non-display state. The screens S1 to S3 are composed of, for example, a liquid crystal film in which a liquid crystal layer containing liquid crystal molecules and an electrode layer for switching the state of the liquid crystal molecules are laminated, or a translucent plate containing microlenses.
 The screens S1 to S3 are all disposed in the irradiation area of the emitted light L1, and each has a projection area (not shown) that projects a virtual image upon receiving the emitted light L1. The screens S1 to S3 are arranged such that the distances from the emission part EP of the light source 20 to their projection areas differ from one another.
 The reflecting member 40 is a reflecting member that reflects the projection light L2, which is the light irradiated onto the projection areas of the screens S1 to S3, and is disposed on a straight line that passes through all of the screens S1 to S3. The reflecting member 40 is composed of, for example, an image combiner that is translucent to visible light.
 The reflecting member 40 has a concave surface portion 41 arranged to face the projection areas of the screens S1 to S3. The concave surface portion 41 functions as a concave mirror for the projection light L2.
 光源20から出射された出射光L1は、スクリーンS1~S3に入射される。スクリーンS1~S3に入射された光は、投影光L2として、出射光L1の照射面の反対側の表面から、各スクリーンの当該反対側の表面に垂直な方向に出射される。そして、投影光L2は、反射部材40に向かって進む。 The outgoing light L1 emitted from the light source 20 is incident on the screens S1 to S3. The light incident on the screens S1 to S3 is emitted as projection light L2 from the surface opposite to the irradiation surface of the emitted light L1 in a direction perpendicular to the opposite surface of each screen. Then, the projection light L2 travels toward the reflecting member 40.
 観察者は、視点EYから反射部材40を観察すると、反射部材40の奥側(z軸方向における前方)に位置する虚像表示領域IVに虚像を視認することができる。具体的には、観察者は、それぞれスクリーンS1、S2及びS3と反射部材40との間の距離に応じて奥行方向の距離が互いに異なる表示領域V1、V2及びV3に表示された虚像を、反射部材40越しに視認することとなる。 When the observer observes the reflecting member 40 from the viewpoint EY, the observer can visually recognize the virtual image in the virtual image display area IV located on the back side (forward in the z-axis direction) of the reflecting member 40. Specifically, the observer reflects the virtual images displayed in the display areas V1, V2, and V3 having different distances in the depth direction according to the distances between the screens S1, S2, and S3 and the reflecting member 40, respectively. It will be visually recognized through the member 40.
 The image data generation unit 50 generates image data VD to be projected onto the screens S1 to S3 in order to display a virtual image in the virtual image display area IV. The image data generation unit 50 generates the image data VD based on surrounding information indicating the surroundings of the host vehicle and on map information, and supplies it to the drive unit 60 and the control unit 70.
 The drive unit 60 drives the light source 20 and the screen group 30. Based on the image data VD generated by the image data generation unit 50, the drive unit 60 generates a light source drive signal DS1 for driving the light source 20 and supplies it to the light source 20. The light source 20 generates the emitted light L1 corresponding to the image data VD in accordance with the light source drive signal DS1 and irradiates the irradiation region with it.
 The drive unit 60 also generates a screen drive signal DS2 under the control of the control unit 70 and supplies it to the screen group 30. Each of the screens S1 to S3 of the screen group 30 is switched between the transmissive state and the scattering state by the screen drive signal DS2.
 The control unit 70 determines the display position of the virtual image based on the image data VD and controls the driving of the screens S1 to S3 by the drive unit 60 so that the virtual image is displayed at the determined display position. The control unit 70 generates a control signal CS for controlling the driving of the screens S1 to S3 by the drive unit 60 and supplies it to the drive unit 60.
 FIG. 1(b) is a block diagram showing the configuration of the control unit 70. The control unit 70 includes a host vehicle position acquisition unit 11, a front detection unit 12, a relative position calculation unit 13, a virtual image position setting unit 14, and a storage unit 15.
 The host vehicle position acquisition unit 11 is formed of, for example, a GPS (Global Positioning System) device; it receives radio waves transmitted from a plurality of GPS satellites and calculates the distance to each GPS satellite from the received radio waves, thereby acquiring position information of the host vehicle.
 The front detection unit 12 detects objects existing within a predetermined distance ahead of the host vehicle and their positions. The front detection unit 12 is formed of, for example, a radar device such as a pulse radar; it emits radio waves ahead of the host vehicle to detect objects and measures the distance and direction to each object. The front detection unit 12 is not limited to a radar; for example, it may include a camera that captures the scene ahead of the host vehicle and detect the presence and position of objects by image recognition.
 The relative position calculation unit 13 calculates the relative position between the host vehicle and a guidance target of the virtual image display or a target of alerting by the virtual image display (hereinafter collectively referred to simply as the target object), based on the position information of the host vehicle acquired by the host vehicle position acquisition unit 11, map information including the area around the host vehicle, and the object information detected by the front detection unit 12. The relative position calculation unit 13 also calculates the relative position between the host vehicle and other objects existing ahead of it (for example, a forward vehicle), not only the target object of the virtual image display. In other words, the relative position calculation unit 13 acquires the positions of objects in the forward landscape relative to the host vehicle.
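 As a rough illustration of the kind of computation described above, the following Python sketch converts a radar detection (range and bearing) and a map landmark given in world coordinates into positions relative to the host vehicle. The function names, the coordinate conventions, and the inputs are illustrative assumptions and are not taken from the embodiment.

```python
import math

def radar_to_vehicle_frame(distance_m, bearing_rad):
    """Convert a radar detection (range, bearing relative to the vehicle
    heading) into a lateral offset x and a depth z in the vehicle frame."""
    x = distance_m * math.sin(bearing_rad)  # lateral offset
    z = distance_m * math.cos(bearing_rad)  # depth ahead of the vehicle
    return x, z

def landmark_to_vehicle_frame(own_xy, own_heading_rad, landmark_xy):
    """Convert a map landmark given in world coordinates into the vehicle
    frame, using the GPS position and heading of the host vehicle."""
    dx = landmark_xy[0] - own_xy[0]
    dy = landmark_xy[1] - own_xy[1]
    # Project the world-frame offset onto the vehicle's forward and left axes.
    z = dx * math.cos(own_heading_rad) + dy * math.sin(own_heading_rad)
    x = -dx * math.sin(own_heading_rad) + dy * math.cos(own_heading_rad)
    return x, z
```

 In this convention z is the depth ahead of the vehicle and x is the lateral offset, so the overlap test between a target object and an obstacle can be performed directly in this frame.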
 The virtual image position setting unit 14 sets the display position of the virtual image based on the image content indicated by the image data VD and the relative position between the host vehicle and the target object. The virtual image position setting unit 14 sets which of the display areas V1 to V3 is used as the display position in the z-axis direction (that is, the depth direction), and also sets the display position of the virtual image in the x-axis and y-axis directions, which are the in-plane directions of each display area.
 As a general rule, the virtual image position setting unit 14 sets a position close to the guidance target or the alerting target as the display position of the virtual image. However, when another object (an obstacle such as a forward vehicle) is detected at a position that overlaps that display position (that is, the virtual image display position set close to the guidance target or the alerting target) and lies in front of it, the virtual image position setting unit 14 sets a position in front of that other object as the display position of the virtual image.
 The storage unit 15 is an information holding unit that stores, as appropriate, data necessary for the processing of the control unit 70 and data generated in that processing. The storage unit 15 is formed of a storage device such as a hard disk, a flash memory, an SSD (Solid State Drive), or a RAM (Random Access Memory).
 The display device 10 of this embodiment sets or resets the display position of the virtual image based on the positions of objects located ahead of the host vehicle (such as a forward vehicle) in both of the following cases: (1) when a virtual image is about to be newly displayed, and (2) when the situation ahead of the host vehicle changes while a virtual image is already being displayed. The processing operation of the virtual image position setting process executed by the display device 10 of this embodiment is described below for each of cases (1) and (2).
 First, the processing operation of the virtual image position setting process in case (1) is described with reference to the flowchart of FIG. 2. Here, the description takes as an example the display of a virtual image whose content is an arrow instructing a right or left turn at an intersection or guidance to a place such as a store.
 The host vehicle position acquisition unit 11 acquires position information indicating the current position of the host vehicle (step S101).
 The relative position calculation unit 13 refers to the map information around the host vehicle read from the storage unit 15 (step S102) and calculates the relative position between the host vehicle and the guidance target (for example, an intersection or a store) (step S103).
 Based on the relative position calculated by the relative position calculation unit 13, the virtual image position setting unit 14 calculates a display position at which the virtual image overlaps the guidance target in the landscape (step S104).
 The front detection unit 12 detects an obstacle (for example, a forward vehicle) within a predetermined distance ahead of the host vehicle (step S105).
 The relative position calculation unit 13 determines whether an obstacle such as a forward vehicle exists within that range (step S106). If it determines that there is no obstacle (step S106: No), the process proceeds to step S109.
 If it determines that there is an obstacle (step S106: Yes), the relative position calculation unit 13 calculates the relative position between the detected obstacle and the host vehicle (step S107).
 The virtual image position setting unit 14 determines whether the detected obstacle overlaps the virtual image display position calculated in step S104 and is located in front of that display position (step S108).
 If it determines that there is no obstacle overlapping the virtual image display position and located in front of it (step S108: No), the virtual image position setting unit 14 determines the position calculated in step S104 as the display position of the virtual image (step S109).
 On the other hand, if it determines that there is an obstacle overlapping the virtual image display position and located in front of it (step S108: Yes), the virtual image position setting unit 14 changes the virtual image display position calculated in step S104 to a position in front of the obstacle and determines the changed position as the display position of the virtual image (step S110).
 Through the above processing, the display device 10 of this embodiment sets the display position of the virtual image at a position in front of an obstacle such as a forward vehicle and displays the virtual image.
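 The decisions of steps S104 to S110 can be summarized in the following Python sketch, under the simplification that overlap is judged by depth alone (the lateral overlap test is omitted) and that the available display planes are given as a list; the helper name and data layout are assumptions.

```python
def set_initial_display_position(target_depth_m, obstacle_depth_m, depth_planes):
    """Choose a depth plane for a newly displayed virtual image.

    target_depth_m   -- depth of the guidance target (e.g. an intersection)
    obstacle_depth_m -- depth of a detected obstacle, or None if none was found
    depth_planes     -- available virtual-image depths, e.g. [7.0, 4.0, 2.0]
    """
    # S104: pick the plane closest to the guidance target.
    position = min(depth_planes, key=lambda d: abs(d - target_depth_m))

    # S106/S108: an obstacle overlaps the image and lies in front of it.
    if obstacle_depth_m is not None and obstacle_depth_m < position:
        # S110: move the image to the nearest plane in front of the obstacle.
        nearer = [d for d in depth_planes if d < obstacle_depth_m]
        if nearer:
            position = max(nearer)

    # S109/S110: the chosen plane is the display position.
    return position
```

 With the numerical example used below (planes at 7 m, 4 m, and 2 m and a forward vehicle roughly 5 m ahead), this sketch moves a virtual image intended for the 7 m plane to the 4 m plane, which matches the behaviour illustrated in FIGS. 4 and 5.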
 Next, the processing operation of the virtual image position setting process in case (2) is described with reference to the flowchart of FIG. 3. Here, it is assumed that a virtual image whose content is an arrow for a right or left turn at an intersection or guidance to a place such as a store is already being displayed.
 The host vehicle position acquisition unit 11 acquires position information indicating the current position of the host vehicle (step S201).
 The front detection unit 12 detects an obstacle (for example, a forward vehicle) within a predetermined distance ahead of the host vehicle (step S202).
 The relative position calculation unit 13 determines whether an obstacle such as a forward vehicle exists within that range (step S203). If it determines that there is no obstacle (step S203: No), the process proceeds to step S206.
 If it determines that there is an obstacle (step S203: Yes), the relative position calculation unit 13 calculates the relative position between the detected obstacle and the host vehicle (step S204).
 The virtual image position setting unit 14 determines whether the detected obstacle overlaps the display position of the currently displayed virtual image and is located in front of that display position (step S205).
 If it determines that there is no obstacle overlapping the display position of the virtual image and located in front of it (step S205: No), the display device 10 continues displaying the virtual image at the current position (step S206).
 On the other hand, if it determines that there is an obstacle overlapping the display position of the virtual image and located in front of it (step S205: Yes), the display device 10 stops displaying the virtual image at the current position (step S207). The virtual image position setting unit 14 resets (changes) the display position of the virtual image so that it lies in front of the obstacle detected in step S202. The display device 10 then displays the virtual image at the reset display position (step S208).
 Through the above processing, the display device 10 of this embodiment changes the display position of the virtual image in the depth direction so that it lies in front of an obstacle such as a forward vehicle. The display position is changed, for example, by switching the screen onto which the virtual image is projected among S1, S2, and S3.
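 A minimal sketch of how the depth change could map onto the state-variable screens follows, assuming the screen-to-depth assignment of the 7 m / 4 m / 2 m example described below; the dictionary and function names are assumptions.

```python
# Assumed assignment of screens to virtual-image depths (S1 -> V1, etc.).
SCREEN_DEPTH_M = {"S1": 7.0, "S2": 4.0, "S3": 2.0}

def screen_states_for_depth(display_depth_m):
    """Return which screen should scatter (display) and which should
    transmit so that the virtual image appears at the requested depth."""
    active = min(SCREEN_DEPTH_M,
                 key=lambda s: abs(SCREEN_DEPTH_M[s] - display_depth_m))
    return {s: ("scatter" if s == active else "transmit")
            for s in SCREEN_DEPTH_M}

# Example: after the change of FIG. 5 the 4 m plane is requested.
# screen_states_for_depth(4.0)
# -> {'S1': 'transmit', 'S2': 'scatter', 'S3': 'transmit'}
```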
 FIGS. 4 and 5 show a display example of the virtual image in the case where a forward vehicle exists as an obstacle at a position that overlaps the virtual image display position and lies in front of it, comparing the state before the display position is changed (FIG. 4) with the state after the change (FIG. 5).
 FIGS. 4(a) and 5(a) schematically show the positions of the host vehicle and the forward vehicle and the display position of the virtual image as viewed from above along the y-axis direction (that is, the height direction). FIGS. 4(b) and 5(b) show the forward vehicle and the display position of the virtual image as seen along the z-axis direction (that is, the depth direction), which is the direction seen by an observer such as the driver of the host vehicle.
 In this example, the display area V1 is at a distance of 7 m from the frontmost part of the host vehicle, the display area V2 at 4 m, and the display area V3 at 2 m; the host vehicle is denoted OV and the forward vehicle FV. The host vehicle is approaching an intersection, a virtual image indicating the presence and position of a pedestrian is displayed in the display area V1, and a virtual image indicating the position of a gas station is displayed in the display area V3.
 For example, when the host vehicle is scheduled to turn left at the intersection, the display device 10 displays an arrow guiding the left turn as a virtual image. In that case, the virtual image position setting unit 14 calculates a position close to the intersection at which the left turn is to be made as the display position of the virtual image, so the display area V1 (the 7 m position) is selected as the display position.
 As shown in FIG. 4(a), when the distance between the host vehicle OV and the rearmost part of the forward vehicle FV is 4 m or more and less than 7 m, the display position of the virtual image overlaps the body of the forward vehicle FV. Consequently, as shown in FIG. 4(b), the virtual image appears to the observer (for example, the driver of the host vehicle OV) as if it were embedded in the body of the forward vehicle FV.
 When the display position of the virtual image in the depth direction is changed, the display area V2 (the 4 m position), which lies in front of the rearmost part of the forward vehicle FV, is selected as the display position, as shown in FIG. 5(a). As a result, the virtual image is displayed not at a position embedded in the body of the forward vehicle but at a nearer position that does not overlap the vehicle body, as shown in FIG. 5(b).
 As described above, when the display device 10 of this embodiment detects an object that overlaps the displayed virtual image in the landscape and lies in front of the virtual image, it stops displaying the virtual image at the current position and displays the virtual image at a position in front of the object. When the display device 10 of this embodiment detects an object that overlaps, in the landscape, the virtual image display position set according to the display content of the virtual image and lies in front of that display position, it resets the display position of the virtual image to a position in front of the object. The display device 10 performs these processes for each guidance target or alerting target (for example, each store or intersection); accordingly, when the guidance target or alerting target changes, for example as the host vehicle moves, these processes are repeated.
 As a result, the virtual image is displayed in front of obstacles such as a forward vehicle, so the observer (for example, the driver of the host vehicle) can view the virtual image without the sense of incongruity that would arise from the virtual image and the obstacle appearing to overlap. Therefore, the display device 10 of this embodiment can display virtual images without a sense of incongruity even when an obstacle exists in the landscape.
 A display device of Embodiment 2 will now be described. The display device of this embodiment has the same device configuration as the display device 10 of Embodiment 1 shown in FIG. 1(a); that is, the display device 10 of this embodiment includes the light source 20, the screen group 30, the reflecting member 40, the image data generation unit 50, the drive unit 60, and the control unit 70.
 The control unit 70 of the display device 10 of this embodiment has the same configuration as the control unit 70 of Embodiment 1 shown in FIG. 1(b); that is, the control unit 70 of this embodiment includes the host vehicle position acquisition unit 11, the front detection unit 12, the relative position calculation unit 13, the virtual image position setting unit 14, and the storage unit 15.
 In this embodiment, the front detection unit 12 is preferably formed of a measuring device capable of measuring three-dimensional distances with high accuracy. For example, the front detection unit 12 of this embodiment is formed of a LiDAR (Light Detection and Ranging) device that irradiates an object with laser light and measures the distance to the object.
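 As a hedged illustration of how such three-dimensional measurements could be prepared for the per-part comparison used below, the following sketch reduces detections given as (x, y, z) points in the vehicle frame to the nearest obstacle depth per vertical band of the virtual image; the point format, the band layout, and the omission of a lateral overlap filter are assumptions.

```python
def nearest_depth_per_band(points, num_bands, image_bottom_y, image_top_y):
    """Reduce 3-D detections to the nearest obstacle depth per vertical band
    of the virtual image; a band with no detection is left as None."""
    depths = [None] * num_bands
    band_h = (image_top_y - image_bottom_y) / num_bands
    for x, y, z in points:
        # x (lateral) is ignored here; the lateral overlap filter is omitted.
        if not (image_bottom_y <= y < image_top_y):
            continue  # detection outside the vertical extent of the image
        band = int((y - image_bottom_y) / band_h)
        if depths[band] is None or z < depths[band]:
            depths[band] = z
    return depths
```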
 In the display device 10 of this embodiment, when the display position of the virtual image overlaps the position of an obstacle such as a forward vehicle and part of the virtual image display position lies behind the surface of the obstacle, the display mode of that part of the virtual image is changed. Specifically, instead of the virtual image position setting unit 14 changing the display position of the whole virtual image as in Embodiment 1, the image data generation unit 50 generates new image data VD that changes the display mode of the portion of the virtual image overlapping the obstacle that lies behind the obstacle. For example, the image data generation unit 50 generates the image data VD of the virtual image so that the portion displayed behind the obstacle is rendered in a display mode (referred to as a non-standard mode) that is less conspicuous than the display mode of the portion displayed in front of the obstacle (referred to as a standard mode).
 An inconspicuous display mode (non-standard mode) is, for example, a display mode with lower luminance than the standard mode, or a display mode in which only the outline of the virtual image is drawn with a broken line or the like so as to reduce the drawn area of the image. Changing the display mode in this way mitigates the sense of incongruity felt by the observer (for example, the driver of the host vehicle OV) when part of the virtual image appears to spatially overlap the obstacle (that is, appears to be embedded in the obstacle).
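 At the image-data level, one possible reading of the non-standard mode is a simple luminance reduction of the occluded portion, as in the following sketch; the pixel format, the dimming factor, and the function name are assumptions, and an outline-only rendering would be an equally valid choice.

```python
def apply_non_standard_mode(image_rows, occluded_row_mask, dim_factor=0.3):
    """Return image data in which occluded rows are rendered less
    conspicuously (here: at reduced luminance).

    image_rows        -- list of rows, each a list of luminance values 0-255
    occluded_row_mask -- list of bools, True where the row lies behind the
                         obstacle surface
    """
    out = []
    for row, occluded in zip(image_rows, occluded_row_mask):
        if occluded:
            out.append([int(v * dim_factor) for v in row])  # non-standard mode
        else:
            out.append(list(row))                           # standard mode
    return out
```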
 Next, the processing operation of the virtual image display process executed by the display device 10 of this embodiment is described with reference to the flowchart of FIG. 6, taking as an example the case where a virtual image is newly displayed.
 The host vehicle position acquisition unit 11 acquires position information indicating the current position of the host vehicle (step S101).
 The relative position calculation unit 13 refers to the map information around the host vehicle read from the storage unit 15 (step S102) and calculates the relative position between the host vehicle and the guidance target (for example, an intersection or a store) (step S103).
 Based on the relative position calculated by the relative position calculation unit 13, the virtual image position setting unit 14 calculates a display position at which the virtual image overlaps the guidance target in the landscape (step S104).
 The front detection unit 12 detects an obstacle (for example, a forward vehicle) within a predetermined distance ahead of the host vehicle (step S105).
 The relative position calculation unit 13 determines whether an obstacle such as a forward vehicle exists within that range (step S106). If it determines that there is no obstacle (step S106: No), the process proceeds to step S303.
 If it determines that there is an obstacle (step S106: Yes), the relative position calculation unit 13 calculates the distance to each part of the obstacle (step S301).
 The image data generation unit 50 determines whether the surface of the obstacle lies in front of part of the virtual image display position, based on the distances to each part of the obstacle calculated by the relative position calculation unit 13, the depth-direction virtual image display position calculated by the virtual image position setting unit 14 in step S104, and the image data VD being generated (step S302).
 If it is determined that the surface of the obstacle does not lie in front of any part of the virtual image display position (step S302: No), the virtual image position setting unit 14 sets the position calculated in step S104 as the display position of the virtual image. The image data generation unit 50 generates image data VD for displaying the whole virtual image in the standard mode, and the drive unit 60 drives the light source 20 based on the image data VD. As a result, the virtual image is displayed in the standard mode with a position close to the guidance target as its display position (step S303).
 On the other hand, if it is determined that the surface of the obstacle lies in front of part of the virtual image display position (step S302: Yes), the virtual image position setting unit 14 sets the position calculated in step S104 as the display position of the virtual image. The image data generation unit 50 generates image data VD that displays, in the non-standard mode, the portion of the virtual image located behind the surface of the obstacle (that is, behind the surface of the obstacle nearer the host vehicle) and displays the remaining portion in the standard mode. The drive unit 60 drives the light source 20 based on the image data VD. As a result, with a position close to the guidance target as the display position of the virtual image, the portion of the virtual image displayed in front of the surface of the obstacle is displayed in the standard mode, and the portion displayed behind the surface of the obstacle is displayed in the non-standard mode (step S304).
 Through the above processing, the display device 10 of this embodiment displays the portion of a virtual image overlapping an obstacle that appears behind the surface of the obstacle in a display mode (the non-standard mode) that is less conspicuous than the display mode of the portion displayed in front of the surface of the obstacle.
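 A sketch of the per-region comparison behind step S302 follows, assuming the obstacle distances have been reduced to one depth value per vertical band of the virtual image (for example with the helper sketched after the LiDAR description above); the names and data layout are assumptions.

```python
def occlusion_mask(image_depth_m, obstacle_depth_per_band):
    """For each vertical band of the virtual image, decide whether the
    obstacle surface lies in front of the image plane (cf. steps S302/S304)."""
    return [d is not None and d < image_depth_m
            for d in obstacle_depth_per_band]

# Example: image plane at 7 m, vehicle body about 5 m ahead over the lower bands.
# occlusion_mask(7.0, [None, None, 5.2, 5.0]) -> [False, False, True, True]
```

 The resulting mask can be fed to the dimming sketch given earlier to produce the mixed standard / non-standard rendering of step S304.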
 FIGS. 7(a) and 7(b) show a display example of the virtual image in the case where a forward vehicle exists as an obstacle at a position that partially overlaps the virtual image display position. FIG. 7(a) schematically shows the positions of the host vehicle and the forward vehicle and the display position of the virtual image as viewed from the x-axis direction (that is, the lateral direction). FIG. 7(b) shows the forward vehicle and the display position of the virtual image as seen along the z-axis direction (that is, the depth direction), which is the direction seen by an observer such as the driver of the host vehicle.
 As shown in FIG. 7(a), when the display position of the virtual image lies behind the rearmost part of the forward vehicle FV as seen from the host vehicle OV, for example the lower portion of the virtual image overlaps the body of the forward vehicle FV. The display device 10 of this embodiment therefore displays that lower portion of the virtual image in the inconspicuous mode (non-standard mode). As a result, as shown in FIG. 7(b), the observer (for example, the driver of the host vehicle OV) sees the lower portion of the virtual image drawn, for example, only as an outline in broken lines.
 [Modification]
 As a modification of this embodiment, a display device configured so that the apparent depth-direction distance differs between the upper end and the lower end of the virtual image may be used in place of the display device 10 of Embodiment 2 configured as shown in FIG. 1(a).
 FIG. 8 schematically shows the configuration of a display device 10A of this modification, which has such a configuration. The display device 10A differs from the display device 10 of FIG. 1(a) in the configuration of the screen group 30.
 The screen group 30 of the display device 10A includes screens S1A, S2A, and S3A. Like the screens S1 to S3 of the display device 10, the screens S1A, S2A, and S3A are state-variable screens that switch between a transmissive state and a scattering state.
 The screens S1A, S2A, and S3A are arranged parallel to one another so that their normals are inclined with respect to the optical axis of the laser light emitted from the light source 20 toward the reflecting member 40. In addition, the distance from the upper end of each of the screens S1A, S2A, and S3A to the reflecting member 40 is greater than the distance from its lower end to the reflecting member 40.
 When the screens S1A, S2A, and S3A are inclined with respect to the optical axis of the laser light in this way, the observer sees, as the virtual images V1A to V3A corresponding to the respective screens, virtual images whose upper ends are located farther away (that is, forward in the z-axis direction), in other words virtual images tilted forward at the upper end, as shown in FIG. 8.
 The display device 10A of this modification executes the virtual image display process according to the processing operation shown in the flowchart of FIG. 6.
 The image data generation unit 50 of this modification stores in advance, for each depth-direction virtual image display position that the virtual image position setting unit 14 can set in step S104, the correspondence between the vertical position in the image data VD and the depth-direction virtual image display position. The image data generation unit 50 then performs the determination of step S302 based on the distances to each part of the obstacle calculated by the relative position calculation unit 13, the depth-direction virtual image display position calculated by the virtual image position setting unit 14, and the stored correspondence between the vertical position in the image data VD and the depth-direction virtual image display position.
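 For this modification the stored correspondence can be read as a depth value per vertical position of the image data; the sketch below interpolates such per-band depths for a virtual image tilted forward at its upper end and applies the step S302 comparison band by band. The linear interpolation and the function names are assumptions.

```python
def band_depths_for_tilted_image(num_bands, depth_at_bottom_m, depth_at_top_m):
    """Interpolate a depth value per band for a virtual image whose upper
    end appears farther away than its lower end (band 0 is the bottom)."""
    if num_bands == 1:
        return [depth_at_bottom_m]
    step = (depth_at_top_m - depth_at_bottom_m) / (num_bands - 1)
    return [depth_at_bottom_m + i * step for i in range(num_bands)]

def occlusion_mask_tilted(band_depths, obstacle_depth_per_band):
    """Per-band version of the step S302 comparison for the tilted image."""
    return [o is not None and o < d
            for d, o in zip(band_depths, obstacle_depth_per_band)]
```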
 According to the display device 10A of this modification, when it is determined in step S302 that the surface of the obstacle lies in front of part of the virtual image display position, for example, then in step S304 the portion of the virtual image on the near side of the surface of the obstacle is displayed in the standard mode, the portion on the far side is displayed in the non-standard mode, and the virtual image is displayed tilted forward at its upper end.
 FIGS. 9(a) and 9(b) show a display example of the virtual image by the display device 10A of this modification in the case where a forward vehicle exists as an obstacle at a position that partially overlaps the virtual image display position. FIG. 9(a) schematically shows the positions of the host vehicle and the forward vehicle and the display position of the virtual image as viewed from the x-axis direction (that is, the lateral direction). FIG. 9(b) shows the forward vehicle and the display position of the virtual image as seen along the z-axis direction (that is, the depth direction), which is the direction seen by an observer such as the driver of the host vehicle.
 As shown in FIG. 9(a), the virtual image is displayed with its upper end tilted forward, so when the display position of the virtual image extends behind the rearmost part of the forward vehicle FV as seen from the host vehicle OV, for example the upper portion of the virtual image is displayed in the inconspicuous mode (non-standard mode). As shown in FIG. 9(b), the observer (for example, the driver of the host vehicle OV) sees the upper portion of the virtual image drawn, for example, only as an outline in broken lines.
 In this embodiment and its modification described above, the portion of the virtual image displayed behind the obstacle is displayed in an inconspicuous display mode (non-standard mode), for example with only its outline drawn in broken lines. Alternatively, that portion may simply not be displayed (that is, its display may be stopped). In other words, when the position of the obstacle overlaps the display position of the virtual image so that one portion of the virtual image appears behind the surface of the obstacle and another portion appears in front of it, it is sufficient to make the display mode of the portion displayed behind the surface of the obstacle differ from that of the portion displayed in front, so as not to hinder the forward visibility of the observer (for example, the driver of the host vehicle OV).
 As described above, the display device of this embodiment displays the portion of the virtual image that appears behind the surface of an obstacle such as a forward vehicle in an inconspicuous mode, or does not display that portion at all. This allows the observer (for example, the driver of the host vehicle OV) to view the virtual image without the sense of incongruity that would arise from the virtual image and the obstacle appearing to overlap. Therefore, the display device of this embodiment can display virtual images with reduced discomfort even when an obstacle exists in the landscape.
 Embodiments of the present invention are not limited to those shown in the above examples. For example, the above examples describe changing the display position of the virtual image to a position in front of an obstacle such as a forward vehicle when the obstacle is detected at a position that overlaps the displayed virtual image and lies in front of it. However, when an obstacle such as a forward vehicle is detected at a position overlapping the displayed virtual image, the display of that virtual image (the AR display) may simply be stopped. After the display position is changed, a virtual image with a changed display form may be displayed instead of exactly the same virtual image as before the change. Instead of changing the display position in the depth direction (z-axis direction), the position of the virtual image may be changed in the in-plane direction (x-axis or y-axis direction) so that the virtual image does not overlap the obstacle.
 In the above examples, the display device 10 has the screens S1 to S3 and displays virtual images in the display areas V1 to V3, whose distances in the depth direction differ. However, the numbers of screens and display areas are not limited to three; the device may have two, or four or more, screens and display virtual images in two, or four or more, display areas. The virtual images may also be displayed by multi-view or holographic stereoscopic display.
 In the above examples, the display position of the virtual image in the depth direction is changed by switching the screen onto which the virtual image is projected among the screens S1 to S3. However, the display position of the virtual image may instead be changed by moving the position of a screen.
 The series of processes described in each of the above embodiments can be performed by computer processing in accordance with a program stored in a recording medium such as a ROM (Read Only Memory).
DESCRIPTION OF SYMBOLS
10 display device
11 host vehicle position acquisition unit
12 front detection unit
13 relative position calculation unit
14 virtual image position setting unit
15 storage unit
20 light source
30 screen group
S1, S2, S3 screens
40 reflecting member
50 image data generation unit
60 drive unit
70 control unit
10A display device
S1A, S2A, S3A screens

Claims (9)

  1.  A display device comprising:
     a display unit that displays a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; and
     an object position acquisition unit that acquires a position of an object in the landscape,
     wherein the display unit stops displaying one virtual image currently being displayed when the position of the object acquired by the object position acquisition unit overlaps the one virtual image and is located in front of the one virtual image.
  2.  The display device according to claim 1, wherein, when the detection unit detects an object that overlaps the one virtual image in the landscape and is located in front of the one virtual image, the display unit stops displaying the one virtual image and displays a virtual image based on the one virtual image at a position in front of the object.
  3.  The display device according to claim 1 or 2, further comprising:
     a display control unit that sets a display position of the virtual image according to display content of the virtual image and causes the display unit to display the virtual image at the set display position,
     wherein, when the detection unit detects an object that overlaps, in the landscape, the display position set for one virtual image and is located in front of the display position, the display control unit resets a position in front of the object as the display position of the one virtual image.
  4.  The display device according to claim 1, wherein the display unit includes:
     a first screen for displaying a virtual image in a first display area in the landscape; and
     a second screen for displaying a virtual image in a second display area located in front of the first display area in the landscape,
     wherein, when the detection unit detects an object that overlaps the virtual image displayed in the first display area and is located in front of the first display area, the display unit stops displaying the virtual image in the first display area and displays the virtual image in the second display area.
  5.  A display method executed by a display device capable of displaying a virtual image, the method comprising:
     a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape;
     an object position acquisition step of acquiring a position of an object in the landscape; and
     a stop step of stopping display of one virtual image currently being displayed when the position of the object acquired in the object position acquisition step overlaps the one virtual image and is located in front of the one virtual image.
  6.  A program causing a computer of a display device capable of displaying a virtual image to execute:
     a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape;
     an object position acquisition step of acquiring a position of an object in the landscape; and
     a stop step of stopping display of one virtual image currently being displayed when the position of the object acquired in the object position acquisition step overlaps the one virtual image and is located in front of the one virtual image.
  7.  A display device comprising:
     a display unit that displays a virtual image at at least one of a plurality of positions having mutually different depths in a landscape; and
     an object position acquisition unit that acquires a position of an object in the landscape,
     wherein, when the position of the object acquired by the object position acquisition unit overlaps a part of one virtual image currently being displayed in a depth direction in the landscape, the display unit displays a portion of the one virtual image located behind a surface of the object in a display mode less conspicuous than that of a portion located in front of the surface of the object, or stops displaying the portion of the one virtual image located behind the surface of the object.
  8.  A display method executed by a display device capable of displaying a virtual image, the method comprising:
     a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape;
     an object position acquisition step of acquiring a position of an object in the landscape; and
     a display changing step of, when the position of the object acquired in the object position acquisition step overlaps a part of one virtual image currently being displayed in a depth direction in the landscape, displaying a portion of the one virtual image located behind a surface of the object in a display mode less conspicuous than that of a portion located in front of the surface of the object, or stopping display of the portion of the one virtual image located behind the surface of the object.
  9.  A program causing a computer of a display device capable of displaying a virtual image to execute:
     a display step of displaying a virtual image at at least one of a plurality of positions having mutually different depths in a landscape;
     an object position acquisition step of acquiring a position of an object in the landscape; and
     a display changing step of, when the position of the object acquired in the object position acquisition step overlaps a part of one virtual image currently being displayed in a depth direction in the landscape, displaying a portion of the one virtual image located behind a surface of the object in a display mode less conspicuous than that of a portion located in front of the surface of the object, or stopping display of the portion of the one virtual image located behind the surface of the object.
PCT/JP2018/007001 2017-03-14 2018-02-26 Display device, display method, and program WO2018168418A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019505833A JPWO2018168418A1 (en) 2017-03-14 2018-02-26 Display device, display method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-048347 2017-03-14
JP2017048347 2017-03-14

Publications (1)

Publication Number Publication Date
WO2018168418A1 true WO2018168418A1 (en) 2018-09-20

Family

ID=63522034

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/007001 WO2018168418A1 (en) 2017-03-14 2018-02-26 Display device, display method, and program

Country Status (2)

Country Link
JP (1) JPWO2018168418A1 (en)
WO (1) WO2018168418A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020135768A (en) * 2019-02-25 2020-08-31 トヨタ自動車株式会社 Vehicle display device
JP2021020518A (en) * 2019-07-25 2021-02-18 株式会社デンソー Vehicular display controller and vehicular display control method


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002157607A (en) * 2000-11-17 2002-05-31 Canon Inc System and method for image generation, and storage medium
JP2013196492A (en) * 2012-03-21 2013-09-30 Toyota Central R&D Labs Inc Image superimposition processor and image superimposition processing method and program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0585223A (en) * 1991-09-24 1993-04-06 Fujitsu Ltd Headup display unit
JP2004168230A (en) * 2002-11-21 2004-06-17 Nissan Motor Co Ltd Display device for vehicle
JP2014185926A (en) * 2013-03-22 2014-10-02 Aisin Aw Co Ltd Guidance display system
WO2015118859A1 (en) * 2014-02-05 2015-08-13 パナソニックIpマネジメント株式会社 Display device for vehicle and display method of display device for vehicle
JP2016118423A (en) * 2014-12-19 2016-06-30 アイシン・エィ・ダブリュ株式会社 Virtual image display device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020135768A (en) * 2019-02-25 2020-08-31 トヨタ自動車株式会社 Vehicle display device
JP7272007B2 (en) 2019-02-25 2023-05-12 トヨタ自動車株式会社 Vehicle display control device, vehicle display device, vehicle display control method, and vehicle display control program
JP2021020518A (en) * 2019-07-25 2021-02-18 株式会社デンソー Vehicular display controller and vehicular display control method
JP7263962B2 (en) 2019-07-25 2023-04-25 株式会社デンソー VEHICLE DISPLAY CONTROL DEVICE AND VEHICLE DISPLAY CONTROL METHOD

Also Published As

Publication number Publication date
JPWO2018168418A1 (en) 2020-01-09

Similar Documents

Publication Publication Date Title
US10551619B2 (en) Information processing system and information display apparatus
US10890762B2 (en) Image display apparatus and image display method
JP4886751B2 (en) In-vehicle display system and display method
JP6176478B2 (en) Vehicle information projection system
WO2010029707A4 (en) Image projection system and image projection method
JP6342704B2 (en) Display device
JP4715325B2 (en) Information display device
US10649207B1 (en) Display system, information presentation system, method for controlling display system, recording medium, and mobile body
JP2009246505A (en) Image display apparatus and image display method
JP2010143520A (en) On-board display system and display method
JP6945933B2 (en) Display system
US10339843B2 (en) Display device, display image projecting method and head up display
US20190258057A1 (en) Head-up display
US11945306B2 (en) Method for operating a visual field display device for a motor vehicle
JP2016112984A (en) Virtual image display system for vehicle, and head up display
JP2016107947A (en) Information providing device, information providing method, and control program for providing information
JP2016109645A (en) Information providing device, information providing method, and control program for providing information
WO2018168418A1 (en) Display device, display method, and program
JP2010070117A (en) Image irradiation system and image irradiation method
JP2007047735A (en) Visual information display device and visual information display method
KR20180046567A (en) Apparatus and method for controlling head up display (hud) device in vehicle
US20200152157A1 (en) Image processing unit, and head-up display device provided with same
JP6415968B2 (en) COMMUNICATION DEVICE, WARNING DEVICE, DISPLAY DEVICE, CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
JP2023017641A (en) Vehicle display control device, vehicle display device, vehicle display control method, and vehicle display control program
WO2018180857A1 (en) Head-up display apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18768211

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019505833

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18768211

Country of ref document: EP

Kind code of ref document: A1